Mar 09 16:23:26.677979 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 09 16:23:27.225917 master-0 kubenswrapper[4090]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 09 16:23:27.225917 master-0 kubenswrapper[4090]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 09 16:23:27.225917 master-0 kubenswrapper[4090]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 09 16:23:27.227320 master-0 kubenswrapper[4090]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 09 16:23:27.227320 master-0 kubenswrapper[4090]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 09 16:23:27.227320 master-0 kubenswrapper[4090]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
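The deprecation warnings above all point at the config file named by --config (on this node, /etc/kubernetes/kubelet.conf). A minimal sketch of the equivalent KubeletConfiguration fields — field names are from the kubelet's v1beta1 config API; the values mirror the flags recorded later in this log, and the exact fragment shown here is an illustration, not this cluster's actual file:

```yaml
# Hypothetical /etc/kubernetes/kubelet.conf fragment; field names per the
# kubelet v1beta1 KubeletConfiguration API, values taken from the FLAG dump in this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
registerWithTaints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
systemReserved:
  cpu: "500m"
  ephemeral-storage: "1Gi"
  memory: "1Gi"
```

Flags set on the command line still win over the config file, which is why the kubelet keeps logging these warnings until the flags themselves are dropped from the unit file.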
Mar 09 16:23:27.227320 master-0 kubenswrapper[4090]: I0309 16:23:27.227027 4090 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.236954 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237017 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237023 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237028 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237033 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237038 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237044 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237049 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237055 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237060 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237065 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237070 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237075 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237080 4090 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237084 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237090 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237094 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237099 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237104 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:23:27.237077 master-0 kubenswrapper[4090]: W0309 16:23:27.237108 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237113 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237117 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237121 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237126 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237131 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237136 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237141 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237147 4090 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237152 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237158 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237166 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237172 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237188 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237193 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237198 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237202 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237208 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:23:27.238392 master-0 kubenswrapper[4090]: W0309 16:23:27.237214 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237219 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237224 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237229 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237236 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237241 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237245 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237250 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237254 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237266 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237271 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237276 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237281 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237286 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237291 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237296 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237305 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237311 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237316 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:23:27.239259 master-0 kubenswrapper[4090]: W0309 16:23:27.237321 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237327 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237331 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237335 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237339 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237343 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237347 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237351 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237356 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237360 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237365 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237370 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237374 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237379 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237384 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: W0309 16:23:27.237388 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238409 4090 flags.go:64] FLAG: --address="0.0.0.0"
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238453 4090 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238469 4090 flags.go:64] FLAG: --anonymous-auth="true"
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238478 4090 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238489 4090 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238495 4090 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 09 16:23:27.240213 master-0 kubenswrapper[4090]: I0309 16:23:27.238505 4090 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238513 4090 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238519 4090 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238525 4090 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238534 4090 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238540 4090 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238547 4090 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238556 4090 flags.go:64] FLAG: --cgroup-root=""
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238562 4090 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238568 4090 flags.go:64] FLAG: --client-ca-file=""
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238574 4090 flags.go:64] FLAG: --cloud-config=""
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238580 4090 flags.go:64] FLAG: --cloud-provider=""
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238587 4090 flags.go:64] FLAG: --cluster-dns="[]"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238594 4090 flags.go:64] FLAG: --cluster-domain=""
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238600 4090 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238605 4090 flags.go:64] FLAG: --config-dir=""
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238610 4090 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238617 4090 flags.go:64] FLAG: --container-log-max-files="5"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238624 4090 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238630 4090 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238635 4090 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238641 4090 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238647 4090 flags.go:64] FLAG: --contention-profiling="false"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238652 4090 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238659 4090 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 09 16:23:27.241275 master-0 kubenswrapper[4090]: I0309 16:23:27.238665 4090 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238671 4090 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238679 4090 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238687 4090 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238693 4090 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238699 4090 flags.go:64] FLAG: --enable-load-reader="false"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238705 4090 flags.go:64] FLAG: --enable-server="true"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.238710 4090 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239885 4090 flags.go:64] FLAG: --event-burst="100"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239892 4090 flags.go:64] FLAG: --event-qps="50"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239898 4090 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239905 4090 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239911 4090 flags.go:64] FLAG: --eviction-hard=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239920 4090 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239926 4090 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239934 4090 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239941 4090 flags.go:64] FLAG: --eviction-soft=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239946 4090 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239952 4090 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239958 4090 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239963 4090 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239968 4090 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239974 4090 flags.go:64] FLAG: --fail-swap-on="true"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239979 4090 flags.go:64] FLAG: --feature-gates=""
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239986 4090 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 09 16:23:27.242530 master-0 kubenswrapper[4090]: I0309 16:23:27.239991 4090 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.239997 4090 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240003 4090 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240009 4090 flags.go:64] FLAG: --healthz-port="10248"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240013 4090 flags.go:64] FLAG: --help="false"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240018 4090 flags.go:64] FLAG: --hostname-override=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240022 4090 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240027 4090 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240032 4090 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240036 4090 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240046 4090 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240050 4090 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240054 4090 flags.go:64] FLAG: --image-service-endpoint=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240058 4090 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240063 4090 flags.go:64] FLAG: --kube-api-burst="100"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240067 4090 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240072 4090 flags.go:64] FLAG: --kube-api-qps="50"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240077 4090 flags.go:64] FLAG: --kube-reserved=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240082 4090 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240087 4090 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240092 4090 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240098 4090 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240102 4090 flags.go:64] FLAG: --lock-file=""
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240107 4090 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240112 4090 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 09 16:23:27.243919 master-0 kubenswrapper[4090]: I0309 16:23:27.240116 4090 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240135 4090 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240139 4090 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240144 4090 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240148 4090 flags.go:64] FLAG: --logging-format="text"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240153 4090 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240158 4090 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240163 4090 flags.go:64] FLAG: --manifest-url=""
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240167 4090 flags.go:64] FLAG: --manifest-url-header=""
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240174 4090 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240179 4090 flags.go:64] FLAG: --max-open-files="1000000"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240187 4090 flags.go:64] FLAG: --max-pods="110"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240193 4090 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240199 4090 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240204 4090 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240210 4090 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240215 4090 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240225 4090 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240230 4090 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240247 4090 flags.go:64] FLAG: --node-status-max-images="50"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240252 4090 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240257 4090 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240262 4090 flags.go:64] FLAG: --pod-cidr=""
Mar 09 16:23:27.245382 master-0 kubenswrapper[4090]: I0309 16:23:27.240267 4090 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240276 4090 flags.go:64] FLAG: --pod-manifest-path=""
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240281 4090 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240286 4090 flags.go:64] FLAG: --pods-per-core="0"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240290 4090 flags.go:64] FLAG: --port="10250"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240295 4090 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240301 4090 flags.go:64] FLAG: --provider-id=""
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240306 4090 flags.go:64] FLAG: --qos-reserved=""
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240310 4090 flags.go:64] FLAG: --read-only-port="10255"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240315 4090 flags.go:64] FLAG: --register-node="true"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240320 4090 flags.go:64] FLAG: --register-schedulable="true"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240326 4090 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240337 4090 flags.go:64] FLAG: --registry-burst="10"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240342 4090 flags.go:64] FLAG: --registry-qps="5"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240349 4090 flags.go:64] FLAG: --reserved-cpus=""
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240354 4090 flags.go:64] FLAG: --reserved-memory=""
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240361 4090 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240367 4090 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240372 4090 flags.go:64] FLAG: --rotate-certificates="false"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240378 4090 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240382 4090 flags.go:64] FLAG: --runonce="false"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240387 4090 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240392 4090 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240397 4090 flags.go:64] FLAG: --seccomp-default="false"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240402 4090 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240407 4090 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 09 16:23:27.246565 master-0 kubenswrapper[4090]: I0309 16:23:27.240417 4090 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240447 4090 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240454 4090 flags.go:64] FLAG: --storage-driver-password="root"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240459 4090 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240464 4090 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240469 4090 flags.go:64] FLAG: --storage-driver-user="root"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240475 4090 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240481 4090 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240486 4090 flags.go:64] FLAG: --system-cgroups=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240492 4090 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240502 4090 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240507 4090 flags.go:64] FLAG: --tls-cert-file=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240513 4090 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240519 4090 flags.go:64] FLAG: --tls-min-version=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240524 4090 flags.go:64] FLAG: --tls-private-key-file=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240528 4090 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240533 4090 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240537 4090 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240542 4090 flags.go:64] FLAG: --v="2"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240550 4090 flags.go:64] FLAG: --version="false"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240556 4090 flags.go:64] FLAG: --vmodule=""
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240562 4090 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: I0309 16:23:27.240566 4090 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: W0309 16:23:27.240701 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:23:27.248133 master-0 kubenswrapper[4090]: W0309 16:23:27.240709 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240713 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240719 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240723 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240727 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240731 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240735 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240739 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240746 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240749 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240753 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240757 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240760 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240764 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240768 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240771 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240775 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240778 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240782 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240786 4090 feature_gate.go:330] unrecognized feature
gate: ExternalOIDC Mar 09 16:23:27.249675 master-0 kubenswrapper[4090]: W0309 16:23:27.240790 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240794 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240797 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240801 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240805 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240809 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240812 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240816 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240820 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240823 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240827 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240831 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240835 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 09 16:23:27.251278 
master-0 kubenswrapper[4090]: W0309 16:23:27.240839 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240843 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240847 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240851 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240855 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240859 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240862 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 09 16:23:27.251278 master-0 kubenswrapper[4090]: W0309 16:23:27.240868 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240872 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240876 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240881 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240886 4090 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240891 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240895 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240899 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240904 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240909 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240917 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240956 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240961 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240968 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240973 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240979 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240985 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240991 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.240995 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:23:27.252570 master-0 kubenswrapper[4090]: W0309 16:23:27.241000 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241004 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241008 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241012 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241016 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241020 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241025 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241029 4090 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241033 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241038 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241042 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: W0309 16:23:27.241046 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:23:27.253611 master-0 kubenswrapper[4090]: I0309 16:23:27.241061 4090 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:23:27.254373 master-0 kubenswrapper[4090]: I0309 16:23:27.254303 4090 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 09 16:23:27.254373 master-0 kubenswrapper[4090]: I0309 16:23:27.254343 4090 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254419 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254440 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254446 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254450 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254455 4090 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254461 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254468 4090 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254473 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254478 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254484 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254488 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254493 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254497 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254501 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254505 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254509 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254512 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254516 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254520 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:23:27.254535 master-0 kubenswrapper[4090]: W0309 16:23:27.254524 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254528 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254532 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254536 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254541 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254547 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254554 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
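The `feature_gate.go:386] feature gates: {map[...]}` summary line above records the effective gate set as a Go map literal. As a minimal sketch (the parsing helper and sample line are illustrative, not part of the kubelet), such a line can be turned into a Python dict:

```python
import re

def parse_feature_gates(line: str) -> dict:
    """Parse a kubelet 'feature gates: {map[Name:bool ...]}' summary line
    into a {gate_name: bool} dict. The Go map literal is a space-separated
    list of Name:true / Name:false pairs."""
    m = re.search(r"feature gates: \{map\[(.*)\]\}", line)
    if not m:
        return {}
    pairs = m.group(1).split()
    return {name: value == "true"
            for name, value in (pair.split(":") for pair in pairs)}

# Shortened sample in the same shape as the log line above.
line = ('I0309 16:23:27.241061 4090 feature_gate.go:386] '
        'feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}')
gates = parse_feature_gates(line)
```

This is only a convenience for inspecting a captured log; the kubelet itself consumes the gates from its configuration, not from this line.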
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254561 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254569 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254575 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254581 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254586 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254593 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254599 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254603 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254609 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254614 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254618 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254623 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:23:27.255581 master-0 kubenswrapper[4090]: W0309 16:23:27.254628 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254633 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254638 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254643 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254648 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254652 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254657 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254661 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254665 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254670 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254674 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254678 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254682 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254687 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254692 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254698 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254702 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254707 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254711 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:23:27.256650 master-0 kubenswrapper[4090]: W0309 16:23:27.254715 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254719 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254723 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254727 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254731 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254734 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254738 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254742 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254746 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254749 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254753 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254756 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254760 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254764 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: W0309 16:23:27.254768 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:23:27.258587 master-0 kubenswrapper[4090]: I0309 16:23:27.254775 4090 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254898 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254904 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254908 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254912 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254916 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254920 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254924 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254928 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254932 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254936 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254940 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254945 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254949 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254954 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254959 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254964 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254969 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254974 4090 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:23:27.259600 master-0 kubenswrapper[4090]: W0309 16:23:27.254978 4090 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.254982 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.254986 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.254991 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.254994 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.254998 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255002 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255007 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255011 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255014 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255018 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255022 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255026 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255030 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255034 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255042 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255045 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255049 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255053 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255057 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:23:27.260590 master-0 kubenswrapper[4090]: W0309 16:23:27.255061 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255064 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255068 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255071 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255076 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255080 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255084 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255088 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255092 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255095 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255099 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255103 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255107 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255111 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255114 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255118 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255122 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255125 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255129 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255132 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:23:27.261835 master-0 kubenswrapper[4090]: W0309 16:23:27.255136 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255140 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255144 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255148 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255152 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255156 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255160 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255163 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255167 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255170 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255174 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255179 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255184 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: W0309 16:23:27.255189 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 09 16:23:27.262869 master-0 kubenswrapper[4090]: I0309 16:23:27.255195 4090 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 09 16:23:27.263634 master-0 kubenswrapper[4090]: I0309 16:23:27.255469 4090 server.go:940] "Client rotation is on, will bootstrap in background" Mar 09 16:23:27.263634 master-0 kubenswrapper[4090]: I0309 16:23:27.257957 4090 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 09 16:23:27.263634 master-0 kubenswrapper[4090]: I0309 16:23:27.259089 4090 server.go:997] "Starting client certificate rotation" Mar 09 16:23:27.263634 master-0 kubenswrapper[4090]: I0309 16:23:27.259113 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 09 16:23:27.263634 master-0 kubenswrapper[4090]: I0309 16:23:27.259468 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 09 16:23:27.282517 master-0 kubenswrapper[4090]: I0309 16:23:27.282402 4090 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 09 
16:23:27.287180 master-0 kubenswrapper[4090]: I0309 16:23:27.287110 4090 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 09 16:23:27.288260 master-0 kubenswrapper[4090]: E0309 16:23:27.288161 4090 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:27.307563 master-0 kubenswrapper[4090]: I0309 16:23:27.307475 4090 log.go:25] "Validated CRI v1 runtime API" Mar 09 16:23:27.315857 master-0 kubenswrapper[4090]: I0309 16:23:27.315678 4090 log.go:25] "Validated CRI v1 image API" Mar 09 16:23:27.318789 master-0 kubenswrapper[4090]: I0309 16:23:27.318729 4090 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 09 16:23:27.323936 master-0 kubenswrapper[4090]: I0309 16:23:27.323886 4090 fs.go:135] Filesystem UUIDs: map[4d92f182-6acb-4a41-8103-6903266f66d5:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 09 16:23:27.324034 master-0 kubenswrapper[4090]: I0309 16:23:27.323926 4090 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 09 16:23:27.341695 master-0 kubenswrapper[4090]: I0309 16:23:27.341339 4090 manager.go:217] Machine: {Timestamp:2026-03-09 16:23:27.339196499 +0000 UTC m=+0.514511508 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 
NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:f32a84ce369a40d4b790587e3ee415c9 SystemUUID:f32a84ce-369a-40d4-b790-587e3ee415c9 BootID:14726782-964f-4d13-8ec1-f1921737ccdf Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:6a:59:6a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:5c:d5:0d Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:32:fa:9a:e0:19:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 
Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 
Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 09 16:23:27.341695 master-0 kubenswrapper[4090]: I0309 16:23:27.341650 4090 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 09 16:23:27.341854 master-0 kubenswrapper[4090]: I0309 16:23:27.341830 4090 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 09 16:23:27.343416 master-0 kubenswrapper[4090]: I0309 16:23:27.343362 4090 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 09 16:23:27.343780 master-0 kubenswrapper[4090]: I0309 16:23:27.343713 4090 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 09 16:23:27.344119 master-0 kubenswrapper[4090]: I0309 16:23:27.343777 4090 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 09 16:23:27.344189 master-0 kubenswrapper[4090]: I0309 16:23:27.344148 4090 topology_manager.go:138] "Creating topology manager with none policy" Mar 09 16:23:27.344189 master-0 kubenswrapper[4090]: I0309 16:23:27.344171 4090 container_manager_linux.go:303] "Creating device plugin manager" Mar 09 16:23:27.344357 master-0 kubenswrapper[4090]: I0309 16:23:27.344320 4090 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 09 16:23:27.344397 master-0 kubenswrapper[4090]: I0309 16:23:27.344372 4090 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 09 16:23:27.345324 master-0 kubenswrapper[4090]: I0309 16:23:27.345277 4090 state_mem.go:36] "Initialized new in-memory state store" Mar 09 16:23:27.345489 master-0 kubenswrapper[4090]: I0309 16:23:27.345418 4090 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 09 16:23:27.348628 master-0 kubenswrapper[4090]: I0309 16:23:27.348586 4090 kubelet.go:418] "Attempting to sync node with API server" Mar 09 16:23:27.348628 master-0 kubenswrapper[4090]: I0309 16:23:27.348621 4090 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 09 16:23:27.348717 master-0 kubenswrapper[4090]: I0309 16:23:27.348646 4090 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 09 16:23:27.348717 master-0 kubenswrapper[4090]: I0309 16:23:27.348667 4090 kubelet.go:324] "Adding apiserver pod source" Mar 09 16:23:27.348717 master-0 kubenswrapper[4090]: I0309 16:23:27.348685 4090 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 09 16:23:27.353325 master-0 kubenswrapper[4090]: I0309 16:23:27.353286 4090 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 09 16:23:27.354388 master-0 kubenswrapper[4090]: W0309 16:23:27.354318 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:27.354455 master-0 kubenswrapper[4090]: E0309 16:23:27.354406 4090 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:27.354487 master-0 kubenswrapper[4090]: W0309 16:23:27.354361 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:27.354595 master-0 kubenswrapper[4090]: E0309 16:23:27.354536 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:27.356313 master-0 kubenswrapper[4090]: I0309 16:23:27.356281 4090 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 09 16:23:27.356560 master-0 kubenswrapper[4090]: I0309 16:23:27.356524 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 09 16:23:27.356560 master-0 kubenswrapper[4090]: I0309 16:23:27.356553 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 09 16:23:27.356560 master-0 kubenswrapper[4090]: I0309 16:23:27.356564 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356576 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356586 4090 plugins.go:603] 
"Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356595 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356605 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356632 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356643 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356652 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 09 16:23:27.356665 master-0 kubenswrapper[4090]: I0309 16:23:27.356669 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 09 16:23:27.356855 master-0 kubenswrapper[4090]: I0309 16:23:27.356751 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 09 16:23:27.357665 master-0 kubenswrapper[4090]: I0309 16:23:27.357632 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 09 16:23:27.358171 master-0 kubenswrapper[4090]: I0309 16:23:27.358137 4090 server.go:1280] "Started kubelet" Mar 09 16:23:27.359398 master-0 kubenswrapper[4090]: I0309 16:23:27.359332 4090 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 09 16:23:27.359723 master-0 kubenswrapper[4090]: I0309 16:23:27.359404 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:27.359781 master-0 kubenswrapper[4090]: I0309 16:23:27.359616 4090 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Mar 09 16:23:27.359748 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 09 16:23:27.359915 master-0 kubenswrapper[4090]: I0309 16:23:27.359799 4090 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 09 16:23:27.360472 master-0 kubenswrapper[4090]: I0309 16:23:27.360409 4090 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 09 16:23:27.365239 master-0 kubenswrapper[4090]: I0309 16:23:27.365203 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 09 16:23:27.365299 master-0 kubenswrapper[4090]: I0309 16:23:27.365252 4090 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 09 16:23:27.366083 master-0 kubenswrapper[4090]: I0309 16:23:27.365941 4090 server.go:449] "Adding debug handlers to kubelet server" Mar 09 16:23:27.366596 master-0 kubenswrapper[4090]: I0309 16:23:27.366567 4090 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 09 16:23:27.366596 master-0 kubenswrapper[4090]: I0309 16:23:27.366596 4090 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 09 16:23:27.366741 master-0 kubenswrapper[4090]: E0309 16:23:27.366689 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:23:27.366874 master-0 kubenswrapper[4090]: I0309 16:23:27.366835 4090 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 09 16:23:27.367722 master-0 kubenswrapper[4090]: I0309 16:23:27.367701 4090 reconstruct.go:97] "Volume reconstruction finished" Mar 09 16:23:27.367722 master-0 kubenswrapper[4090]: I0309 16:23:27.367719 4090 reconciler.go:26] "Reconciler: start to sync state" Mar 09 16:23:27.369266 master-0 kubenswrapper[4090]: I0309 16:23:27.369171 4090 factory.go:219] Registration of the containerd container factory failed: unable to create containerd 
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 09 16:23:27.369266 master-0 kubenswrapper[4090]: I0309 16:23:27.369231 4090 factory.go:55] Registering systemd factory Mar 09 16:23:27.369266 master-0 kubenswrapper[4090]: I0309 16:23:27.369240 4090 factory.go:221] Registration of the systemd container factory successfully Mar 09 16:23:27.370694 master-0 kubenswrapper[4090]: E0309 16:23:27.367022 4090 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189b38deae457f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.358107453 +0000 UTC m=+0.533422452,LastTimestamp:2026-03-09 16:23:27.358107453 +0000 UTC m=+0.533422452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:27.370694 master-0 kubenswrapper[4090]: I0309 16:23:27.370632 4090 factory.go:153] Registering CRI-O factory Mar 09 16:23:27.370918 master-0 kubenswrapper[4090]: I0309 16:23:27.370707 4090 factory.go:221] Registration of the crio container factory successfully Mar 09 16:23:27.370918 master-0 kubenswrapper[4090]: I0309 16:23:27.370752 4090 factory.go:103] Registering Raw factory Mar 09 16:23:27.370918 master-0 kubenswrapper[4090]: I0309 16:23:27.370778 4090 manager.go:1196] Started watching for new ooms in manager Mar 09 16:23:27.372115 master-0 kubenswrapper[4090]: I0309 16:23:27.372073 4090 manager.go:319] Starting recovery of all containers Mar 09 
16:23:27.372640 master-0 kubenswrapper[4090]: E0309 16:23:27.369373 4090 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 09 16:23:27.372833 master-0 kubenswrapper[4090]: E0309 16:23:27.372755 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 09 16:23:27.372953 master-0 kubenswrapper[4090]: W0309 16:23:27.372826 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:27.373049 master-0 kubenswrapper[4090]: E0309 16:23:27.373000 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:27.403888 master-0 kubenswrapper[4090]: I0309 16:23:27.403776 4090 manager.go:324] Recovery completed Mar 09 16:23:27.416316 master-0 kubenswrapper[4090]: I0309 16:23:27.416268 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:27.418475 master-0 kubenswrapper[4090]: I0309 16:23:27.418440 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:27.418536 master-0 kubenswrapper[4090]: I0309 16:23:27.418487 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Mar 09 16:23:27.418536 master-0 kubenswrapper[4090]: I0309 16:23:27.418498 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:27.419635 master-0 kubenswrapper[4090]: I0309 16:23:27.419605 4090 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 09 16:23:27.419635 master-0 kubenswrapper[4090]: I0309 16:23:27.419625 4090 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 09 16:23:27.419708 master-0 kubenswrapper[4090]: I0309 16:23:27.419647 4090 state_mem.go:36] "Initialized new in-memory state store" Mar 09 16:23:27.422812 master-0 kubenswrapper[4090]: I0309 16:23:27.422781 4090 policy_none.go:49] "None policy: Start" Mar 09 16:23:27.423355 master-0 kubenswrapper[4090]: I0309 16:23:27.423332 4090 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 09 16:23:27.423736 master-0 kubenswrapper[4090]: I0309 16:23:27.423377 4090 state_mem.go:35] "Initializing new in-memory state store" Mar 09 16:23:27.467450 master-0 kubenswrapper[4090]: E0309 16:23:27.467388 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:23:27.478829 master-0 kubenswrapper[4090]: I0309 16:23:27.478757 4090 manager.go:334] "Starting Device Plugin manager" Mar 09 16:23:27.478829 master-0 kubenswrapper[4090]: I0309 16:23:27.478815 4090 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 09 16:23:27.478829 master-0 kubenswrapper[4090]: I0309 16:23:27.478828 4090 server.go:79] "Starting device plugin registration server" Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.479231 4090 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.479245 4090 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.479440 4090 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.479675 4090 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.479689 4090 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: E0309 16:23:27.482164 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.505130 4090 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.507077 4090 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.507133 4090 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: I0309 16:23:27.507160 4090 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: E0309 16:23:27.507208 4090 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: W0309 16:23:27.508343 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 09 16:23:27.514407 master-0 kubenswrapper[4090]: E0309 16:23:27.508385 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 09 16:23:27.574679 master-0 kubenswrapper[4090]: E0309 16:23:27.574562 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 09 16:23:27.579704 master-0 kubenswrapper[4090]: I0309 16:23:27.579652 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.580841 master-0 kubenswrapper[4090]: I0309 16:23:27.580816 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.580985 master-0 kubenswrapper[4090]: I0309 16:23:27.580851 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.580985 master-0 kubenswrapper[4090]: I0309 16:23:27.580860 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.580985 master-0 kubenswrapper[4090]: I0309 16:23:27.580905 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:23:27.582091 master-0 kubenswrapper[4090]: E0309 16:23:27.581999 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 09 16:23:27.608395 master-0 kubenswrapper[4090]: I0309 16:23:27.608219 4090 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 09 16:23:27.608683 master-0 kubenswrapper[4090]: I0309 16:23:27.608490 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.610236 master-0 kubenswrapper[4090]: I0309 16:23:27.610200 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.610442 master-0 kubenswrapper[4090]: I0309 16:23:27.610249 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.610442 master-0 kubenswrapper[4090]: I0309 16:23:27.610262 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.610442 master-0 kubenswrapper[4090]: I0309 16:23:27.610415 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.611650 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.611684 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.611694 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.611789 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.611643 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.612145 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.612500 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.612491 master-0 kubenswrapper[4090]: I0309 16:23:27.612523 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.613355 master-0 kubenswrapper[4090]: I0309 16:23:27.613293 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.613355 master-0 kubenswrapper[4090]: I0309 16:23:27.613319 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.613355 master-0 kubenswrapper[4090]: I0309 16:23:27.613328 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.613652 master-0 kubenswrapper[4090]: I0309 16:23:27.613476 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.613652 master-0 kubenswrapper[4090]: I0309 16:23:27.613497 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.613652 master-0 kubenswrapper[4090]: I0309 16:23:27.613522 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.613652 master-0 kubenswrapper[4090]: I0309 16:23:27.613533 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.613652 master-0 kubenswrapper[4090]: I0309 16:23:27.613585 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.613664 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.613699 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.613796 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.613879 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.614159 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.614188 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.614229 master-0 kubenswrapper[4090]: I0309 16:23:27.614200 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.614935 master-0 kubenswrapper[4090]: I0309 16:23:27.614291 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.614935 master-0 kubenswrapper[4090]: I0309 16:23:27.614638 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.614935 master-0 kubenswrapper[4090]: I0309 16:23:27.614900 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.615231 master-0 kubenswrapper[4090]: I0309 16:23:27.615203 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.615231 master-0 kubenswrapper[4090]: I0309 16:23:27.615232 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.615231 master-0 kubenswrapper[4090]: I0309 16:23:27.615241 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.615503 master-0 kubenswrapper[4090]: I0309 16:23:27.615455 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.615503 master-0 kubenswrapper[4090]: I0309 16:23:27.615466 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.615503 master-0 kubenswrapper[4090]: I0309 16:23:27.615473 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.615676 master-0 kubenswrapper[4090]: I0309 16:23:27.615615 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.615676 master-0 kubenswrapper[4090]: I0309 16:23:27.615648 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.616182 master-0 kubenswrapper[4090]: I0309 16:23:27.616156 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.616274 master-0 kubenswrapper[4090]: I0309 16:23:27.616195 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.616274 master-0 kubenswrapper[4090]: I0309 16:23:27.616206 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.616500 master-0 kubenswrapper[4090]: I0309 16:23:27.616473 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.616606 master-0 kubenswrapper[4090]: I0309 16:23:27.616507 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.616606 master-0 kubenswrapper[4090]: I0309 16:23:27.616520 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.669732 master-0 kubenswrapper[4090]: I0309 16:23:27.669665 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.669986 master-0 kubenswrapper[4090]: I0309 16:23:27.669828 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.669986 master-0 kubenswrapper[4090]: I0309 16:23:27.669856 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.669986 master-0 kubenswrapper[4090]: I0309 16:23:27.669877 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.669986 master-0 kubenswrapper[4090]: I0309 16:23:27.669896 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.669986 master-0 kubenswrapper[4090]: I0309 16:23:27.669913 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.669986 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670029 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670063 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670090 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670112 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670129 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670160 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.670227 master-0 kubenswrapper[4090]: I0309 16:23:27.670178 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.670550 master-0 kubenswrapper[4090]: I0309 16:23:27.670266 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.670550 master-0 kubenswrapper[4090]: I0309 16:23:27.670377 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.670550 master-0 kubenswrapper[4090]: I0309 16:23:27.670508 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.728394 master-0 kubenswrapper[4090]: E0309 16:23:27.728162 4090 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189b38deae457f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.358107453 +0000 UTC m=+0.533422452,LastTimestamp:2026-03-09 16:23:27.358107453 +0000 UTC m=+0.533422452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:27.771938 master-0 kubenswrapper[4090]: I0309 16:23:27.771763 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.771938 master-0 kubenswrapper[4090]: I0309 16:23:27.771844 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.771938 master-0 kubenswrapper[4090]: I0309 16:23:27.771872 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.771938 master-0 kubenswrapper[4090]: I0309 16:23:27.771896 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772247 master-0 kubenswrapper[4090]: I0309 16:23:27.772032 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772247 master-0 kubenswrapper[4090]: I0309 16:23:27.772080 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772328 master-0 kubenswrapper[4090]: I0309 16:23:27.772232 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.772368 master-0 kubenswrapper[4090]: I0309 16:23:27.772304 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.772368 master-0 kubenswrapper[4090]: I0309 16:23:27.772276 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772368 master-0 kubenswrapper[4090]: I0309 16:23:27.772343 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772501 master-0 kubenswrapper[4090]: I0309 16:23:27.772250 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772501 master-0 kubenswrapper[4090]: I0309 16:23:27.772392 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772501 master-0 kubenswrapper[4090]: I0309 16:23:27.772375 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772501 master-0 kubenswrapper[4090]: I0309 16:23:27.772457 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772501 master-0 kubenswrapper[4090]: I0309 16:23:27.772483 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772501 master-0 kubenswrapper[4090]: I0309 16:23:27.772490 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772662 master-0 kubenswrapper[4090]: I0309 16:23:27.772558 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772662 master-0 kubenswrapper[4090]: I0309 16:23:27.772608 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.772662 master-0 kubenswrapper[4090]: I0309 16:23:27.772635 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.772662 master-0 kubenswrapper[4090]: I0309 16:23:27.772658 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772699 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772702 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772724 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772734 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772750 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772759 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772775 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772804 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772807 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772812 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.772827 master-0 kubenswrapper[4090]: I0309 16:23:27.772832 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.773108 master-0 kubenswrapper[4090]: I0309 16:23:27.772827 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.773108 master-0 kubenswrapper[4090]: I0309 16:23:27.772859 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.773108 master-0 kubenswrapper[4090]: I0309 16:23:27.772877 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.783305 master-0 kubenswrapper[4090]: I0309 16:23:27.782926 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:27.784393 master-0 kubenswrapper[4090]: I0309 16:23:27.784363 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:27.784393 master-0 kubenswrapper[4090]: I0309 16:23:27.784402 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:27.784531 master-0 kubenswrapper[4090]: I0309 16:23:27.784412 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:27.784531 master-0 kubenswrapper[4090]: I0309 16:23:27.784490 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:23:27.785575 master-0 kubenswrapper[4090]: E0309 16:23:27.785504 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 09 16:23:27.938674 master-0 kubenswrapper[4090]: I0309 16:23:27.938563 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:23:27.947189 master-0 kubenswrapper[4090]: I0309 16:23:27.947150 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:23:27.964255 master-0 kubenswrapper[4090]: I0309 16:23:27.964176 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:23:27.976075 master-0 kubenswrapper[4090]: E0309 16:23:27.975988 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 09 16:23:27.993345 master-0 kubenswrapper[4090]: I0309 16:23:27.993239 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:28.001767 master-0 kubenswrapper[4090]: I0309 16:23:28.001685 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:23:28.185838 master-0 kubenswrapper[4090]: I0309 16:23:28.185630 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:28.187191 master-0 kubenswrapper[4090]: I0309 16:23:28.186863 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:28.187191 master-0 kubenswrapper[4090]: I0309 16:23:28.186930 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:28.187191 master-0 kubenswrapper[4090]: I0309 16:23:28.186946 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:28.187191 master-0 kubenswrapper[4090]: I0309 16:23:28.187022 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:23:28.188110 master-0 kubenswrapper[4090]: E0309 16:23:28.188058 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 09 16:23:28.361502 master-0 kubenswrapper[4090]: I0309 16:23:28.361354 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 09 16:23:28.566604 master-0 kubenswrapper[4090]: W0309 16:23:28.566318 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 09 16:23:28.566604 master-0 kubenswrapper[4090]:
E0309 16:23:28.566413 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:28.570890 master-0 kubenswrapper[4090]: W0309 16:23:28.570665 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:28.570890 master-0 kubenswrapper[4090]: E0309 16:23:28.570880 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:28.590303 master-0 kubenswrapper[4090]: W0309 16:23:28.590219 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea WatchSource:0}: Error finding container 24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea: Status 404 returned error can't find the container with id 24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea Mar 09 16:23:28.591671 master-0 kubenswrapper[4090]: W0309 16:23:28.591593 4090 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d WatchSource:0}: Error finding container b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d: Status 404 returned error can't find the container with id b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d Mar 09 16:23:28.592913 master-0 kubenswrapper[4090]: W0309 16:23:28.592883 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc WatchSource:0}: Error finding container 20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc: Status 404 returned error can't find the container with id 20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc Mar 09 16:23:28.597824 master-0 kubenswrapper[4090]: I0309 16:23:28.597789 4090 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 16:23:28.618033 master-0 kubenswrapper[4090]: W0309 16:23:28.617923 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988 WatchSource:0}: Error finding container 618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988: Status 404 returned error can't find the container with id 618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988 Mar 09 16:23:28.647466 master-0 kubenswrapper[4090]: W0309 16:23:28.647384 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136 
WatchSource:0}: Error finding container 07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136: Status 404 returned error can't find the container with id 07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136 Mar 09 16:23:28.693974 master-0 kubenswrapper[4090]: W0309 16:23:28.693822 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:28.693974 master-0 kubenswrapper[4090]: E0309 16:23:28.693939 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:28.778012 master-0 kubenswrapper[4090]: E0309 16:23:28.777916 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 09 16:23:28.943890 master-0 kubenswrapper[4090]: W0309 16:23:28.943738 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:28.943890 master-0 kubenswrapper[4090]: E0309 16:23:28.943818 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:28.988764 master-0 kubenswrapper[4090]: I0309 16:23:28.988690 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:28.990615 master-0 kubenswrapper[4090]: I0309 16:23:28.990586 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:28.990696 master-0 kubenswrapper[4090]: I0309 16:23:28.990619 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:28.990696 master-0 kubenswrapper[4090]: I0309 16:23:28.990629 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:28.990696 master-0 kubenswrapper[4090]: I0309 16:23:28.990673 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:23:28.991685 master-0 kubenswrapper[4090]: E0309 16:23:28.991623 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 09 16:23:29.361633 master-0 kubenswrapper[4090]: I0309 16:23:29.361498 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:29.487796 master-0 kubenswrapper[4090]: I0309 16:23:29.487736 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 09 16:23:29.489471 master-0 kubenswrapper[4090]: E0309 16:23:29.489387 4090 certificate_manager.go:562] "Unhandled 
Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:29.514641 master-0 kubenswrapper[4090]: I0309 16:23:29.514511 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136"} Mar 09 16:23:29.516124 master-0 kubenswrapper[4090]: I0309 16:23:29.516094 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988"} Mar 09 16:23:29.517148 master-0 kubenswrapper[4090]: I0309 16:23:29.517046 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea"} Mar 09 16:23:29.518081 master-0 kubenswrapper[4090]: I0309 16:23:29.518052 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc"} Mar 09 16:23:29.519570 master-0 kubenswrapper[4090]: I0309 16:23:29.519524 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d"} Mar 09 16:23:30.361550 master-0 kubenswrapper[4090]: I0309 16:23:30.361483 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:30.379973 master-0 kubenswrapper[4090]: E0309 16:23:30.379897 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 09 16:23:30.592156 master-0 kubenswrapper[4090]: I0309 16:23:30.592108 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:30.593267 master-0 kubenswrapper[4090]: I0309 16:23:30.593236 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:30.593318 master-0 kubenswrapper[4090]: I0309 16:23:30.593283 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:30.593318 master-0 kubenswrapper[4090]: I0309 16:23:30.593295 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:30.593375 master-0 kubenswrapper[4090]: I0309 16:23:30.593344 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:23:30.594216 master-0 kubenswrapper[4090]: E0309 16:23:30.594181 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" 
node="master-0" Mar 09 16:23:30.637087 master-0 kubenswrapper[4090]: W0309 16:23:30.636987 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:30.637087 master-0 kubenswrapper[4090]: E0309 16:23:30.637058 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:30.970989 master-0 kubenswrapper[4090]: W0309 16:23:30.970829 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:30.970989 master-0 kubenswrapper[4090]: E0309 16:23:30.970938 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:31.102383 master-0 kubenswrapper[4090]: W0309 16:23:31.102317 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:31.102583 master-0 kubenswrapper[4090]: E0309 16:23:31.102411 4090 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:31.361210 master-0 kubenswrapper[4090]: I0309 16:23:31.361089 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:31.618420 master-0 kubenswrapper[4090]: W0309 16:23:31.618305 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:31.618420 master-0 kubenswrapper[4090]: E0309 16:23:31.618363 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:32.360761 master-0 kubenswrapper[4090]: I0309 16:23:32.360713 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:32.528589 master-0 kubenswrapper[4090]: I0309 16:23:32.528193 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e"} Mar 09 16:23:32.530218 master-0 kubenswrapper[4090]: I0309 16:23:32.530135 4090 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="3e490c33eb237e7928c7acb6c95a66dd05db37a72075c92f51066f56e730d5ab" exitCode=0 Mar 09 16:23:32.530308 master-0 kubenswrapper[4090]: I0309 16:23:32.530223 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"3e490c33eb237e7928c7acb6c95a66dd05db37a72075c92f51066f56e730d5ab"} Mar 09 16:23:32.530308 master-0 kubenswrapper[4090]: I0309 16:23:32.530263 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:32.530930 master-0 kubenswrapper[4090]: I0309 16:23:32.530895 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:32.531010 master-0 kubenswrapper[4090]: I0309 16:23:32.530934 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:32.531010 master-0 kubenswrapper[4090]: I0309 16:23:32.530944 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:33.362176 master-0 kubenswrapper[4090]: I0309 16:23:33.362056 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:33.533982 master-0 kubenswrapper[4090]: I0309 16:23:33.533907 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093"} Mar 09 16:23:33.533982 master-0 kubenswrapper[4090]: I0309 16:23:33.533967 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:33.535448 master-0 kubenswrapper[4090]: I0309 16:23:33.534824 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:33.535448 master-0 kubenswrapper[4090]: I0309 16:23:33.534870 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:33.535448 master-0 kubenswrapper[4090]: I0309 16:23:33.534883 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:33.536660 master-0 kubenswrapper[4090]: I0309 16:23:33.536628 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 09 16:23:33.537163 master-0 kubenswrapper[4090]: I0309 16:23:33.537104 4090 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="9fda9325c21a9754b0f9e5ebeb43ab767ea9e413879f620d22460b870d595210" exitCode=1 Mar 09 16:23:33.537163 master-0 kubenswrapper[4090]: I0309 16:23:33.537137 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"9fda9325c21a9754b0f9e5ebeb43ab767ea9e413879f620d22460b870d595210"} Mar 09 16:23:33.537242 master-0 kubenswrapper[4090]: I0309 16:23:33.537197 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 
16:23:33.537894 master-0 kubenswrapper[4090]: I0309 16:23:33.537872 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:33.537894 master-0 kubenswrapper[4090]: I0309 16:23:33.537899 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:33.537978 master-0 kubenswrapper[4090]: I0309 16:23:33.537908 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:33.538176 master-0 kubenswrapper[4090]: I0309 16:23:33.538153 4090 scope.go:117] "RemoveContainer" containerID="9fda9325c21a9754b0f9e5ebeb43ab767ea9e413879f620d22460b870d595210" Mar 09 16:23:33.581793 master-0 kubenswrapper[4090]: E0309 16:23:33.581736 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 09 16:23:33.613030 master-0 kubenswrapper[4090]: I0309 16:23:33.612937 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 09 16:23:33.613949 master-0 kubenswrapper[4090]: E0309 16:23:33.613900 4090 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:33.795882 master-0 kubenswrapper[4090]: I0309 16:23:33.795318 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:33.796344 master-0 kubenswrapper[4090]: I0309 
16:23:33.796308 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:33.796400 master-0 kubenswrapper[4090]: I0309 16:23:33.796362 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:33.796400 master-0 kubenswrapper[4090]: I0309 16:23:33.796380 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:33.796541 master-0 kubenswrapper[4090]: I0309 16:23:33.796475 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:23:33.797335 master-0 kubenswrapper[4090]: E0309 16:23:33.797292 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 09 16:23:34.361624 master-0 kubenswrapper[4090]: I0309 16:23:34.361567 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:34.538603 master-0 kubenswrapper[4090]: I0309 16:23:34.538550 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:34.539577 master-0 kubenswrapper[4090]: I0309 16:23:34.539272 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:34.539577 master-0 kubenswrapper[4090]: I0309 16:23:34.539301 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:34.539577 master-0 kubenswrapper[4090]: I0309 16:23:34.539311 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 09 16:23:35.361062 master-0 kubenswrapper[4090]: I0309 16:23:35.360993 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:36.100165 master-0 kubenswrapper[4090]: W0309 16:23:36.099991 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:36.100165 master-0 kubenswrapper[4090]: E0309 16:23:36.100120 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 09 16:23:36.361398 master-0 kubenswrapper[4090]: I0309 16:23:36.361313 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 09 16:23:36.547916 master-0 kubenswrapper[4090]: I0309 16:23:36.547838 4090 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22" exitCode=0 Mar 09 16:23:36.547916 master-0 kubenswrapper[4090]: I0309 16:23:36.547920 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22"} Mar 09 16:23:36.548145 master-0 kubenswrapper[4090]: I0309 16:23:36.548013 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:36.548796 master-0 kubenswrapper[4090]: I0309 16:23:36.548767 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:36.548796 master-0 kubenswrapper[4090]: I0309 16:23:36.548789 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:36.548796 master-0 kubenswrapper[4090]: I0309 16:23:36.548797 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:36.550616 master-0 kubenswrapper[4090]: I0309 16:23:36.550588 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 09 16:23:36.551110 master-0 kubenswrapper[4090]: I0309 16:23:36.551080 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 09 16:23:36.551785 master-0 kubenswrapper[4090]: I0309 16:23:36.551730 4090 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="34ba6f4206d152cbab665ff7e5cdbee73846c546c16f056560194b7119321fa3" exitCode=1 Mar 09 16:23:36.551851 master-0 kubenswrapper[4090]: I0309 16:23:36.551792 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"34ba6f4206d152cbab665ff7e5cdbee73846c546c16f056560194b7119321fa3"}
Mar 09 16:23:36.551851 master-0 kubenswrapper[4090]: I0309 16:23:36.551829 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:36.551946 master-0 kubenswrapper[4090]: I0309 16:23:36.551894 4090 scope.go:117] "RemoveContainer" containerID="9fda9325c21a9754b0f9e5ebeb43ab767ea9e413879f620d22460b870d595210"
Mar 09 16:23:36.552481 master-0 kubenswrapper[4090]: I0309 16:23:36.552458 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:36.553690 master-0 kubenswrapper[4090]: I0309 16:23:36.553625 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:36.553752 master-0 kubenswrapper[4090]: I0309 16:23:36.553700 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:36.553752 master-0 kubenswrapper[4090]: I0309 16:23:36.553721 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:36.553752 master-0 kubenswrapper[4090]: I0309 16:23:36.553658 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:36.553851 master-0 kubenswrapper[4090]: I0309 16:23:36.553759 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:36.553851 master-0 kubenswrapper[4090]: I0309 16:23:36.553776 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:36.554231 master-0 kubenswrapper[4090]: I0309 16:23:36.554172 4090 scope.go:117] "RemoveContainer" containerID="34ba6f4206d152cbab665ff7e5cdbee73846c546c16f056560194b7119321fa3"
Mar 09 16:23:36.554401 master-0 kubenswrapper[4090]: E0309 16:23:36.554360 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 09 16:23:36.554401 master-0 kubenswrapper[4090]: I0309 16:23:36.554381 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"e8fcbf086ed08a14966a423a93930e67c1cbd9793017fcea8581f23478898eea"}
Mar 09 16:23:36.554586 master-0 kubenswrapper[4090]: I0309 16:23:36.554537 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:36.555558 master-0 kubenswrapper[4090]: I0309 16:23:36.555525 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:36.555558 master-0 kubenswrapper[4090]: I0309 16:23:36.555553 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:36.555639 master-0 kubenswrapper[4090]: I0309 16:23:36.555566 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:36.557239 master-0 kubenswrapper[4090]: I0309 16:23:36.557192 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"d44b4281b666f32d8647c6a143f074eebe2a44e65e8dee2574808efbf233ffa9"}
Mar 09 16:23:36.622836 master-0 kubenswrapper[4090]: W0309 16:23:36.622729 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 09 16:23:36.622836 master-0 kubenswrapper[4090]: E0309 16:23:36.622837 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 09 16:23:37.483251 master-0 kubenswrapper[4090]: E0309 16:23:37.483198 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 09 16:23:37.564317 master-0 kubenswrapper[4090]: I0309 16:23:37.564237 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c"}
Mar 09 16:23:37.565759 master-0 kubenswrapper[4090]: I0309 16:23:37.565730 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 09 16:23:37.568589 master-0 kubenswrapper[4090]: I0309 16:23:37.568532 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:37.568837 master-0 kubenswrapper[4090]: I0309 16:23:37.568685 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:37.569923 master-0 kubenswrapper[4090]: I0309 16:23:37.569886 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:37.570014 master-0 kubenswrapper[4090]: I0309 16:23:37.569929 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:37.570014 master-0 kubenswrapper[4090]: I0309 16:23:37.569943 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:37.571065 master-0 kubenswrapper[4090]: I0309 16:23:37.571034 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:37.571065 master-0 kubenswrapper[4090]: I0309 16:23:37.571061 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:37.571150 master-0 kubenswrapper[4090]: I0309 16:23:37.571139 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:37.572388 master-0 kubenswrapper[4090]: I0309 16:23:37.572235 4090 scope.go:117] "RemoveContainer" containerID="34ba6f4206d152cbab665ff7e5cdbee73846c546c16f056560194b7119321fa3"
Mar 09 16:23:37.572540 master-0 kubenswrapper[4090]: E0309 16:23:37.572489 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 09 16:23:38.375459 master-0 kubenswrapper[4090]: E0309 16:23:38.373497 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deae457f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.358107453 +0000 UTC m=+0.533422452,LastTimestamp:2026-03-09 16:23:27.358107453 +0000 UTC m=+0.533422452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.375459 master-0 kubenswrapper[4090]: W0309 16:23:38.373607 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:38.375459 master-0 kubenswrapper[4090]: E0309 16:23:38.373929 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 09 16:23:38.375459 master-0 kubenswrapper[4090]: W0309 16:23:38.373667 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 09 16:23:38.375459 master-0 kubenswrapper[4090]: E0309 16:23:38.373963 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 09 16:23:38.379449 master-0 kubenswrapper[4090]: I0309 16:23:38.377315 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:38.379449 master-0 kubenswrapper[4090]: E0309 16:23:38.377533 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.379578 master-0 kubenswrapper[4090]: E0309 16:23:38.379401 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.385461 master-0 kubenswrapper[4090]: E0309 16:23:38.385080 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.399341 master-0 kubenswrapper[4090]: E0309 16:23:38.399211 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb5b84f64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.483072356 +0000 UTC m=+0.658387355,LastTimestamp:2026-03-09 16:23:27.483072356 +0000 UTC m=+0.658387355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.410046 master-0 kubenswrapper[4090]: E0309 16:23:38.409802 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.580838541 +0000 UTC m=+0.756153530,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.416968 master-0 kubenswrapper[4090]: E0309 16:23:38.416823 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.580857991 +0000 UTC m=+0.756172980,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.427648 master-0 kubenswrapper[4090]: E0309 16:23:38.427520 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1df14a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.580866241 +0000 UTC m=+0.756181230,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.434447 master-0 kubenswrapper[4090]: E0309 16:23:38.434241 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.610230143 +0000 UTC m=+0.785545132,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.440008 master-0 kubenswrapper[4090]: E0309 16:23:38.439845 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.610257883 +0000 UTC m=+0.785572882,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.445373 master-0 kubenswrapper[4090]: E0309 16:23:38.444567 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1df14a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.610267843 +0000 UTC m=+0.785582842,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.452735 master-0 kubenswrapper[4090]: E0309 16:23:38.452564 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.611677447 +0000 UTC m=+0.786992436,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.457804 master-0 kubenswrapper[4090]: E0309 16:23:38.457695 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.611690317 +0000 UTC m=+0.787005306,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.468373 master-0 kubenswrapper[4090]: E0309 16:23:38.467580 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1df14a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.611700288 +0000 UTC m=+0.787015277,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.475002 master-0 kubenswrapper[4090]: E0309 16:23:38.474898 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.613312695 +0000 UTC m=+0.788627684,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.479189 master-0 kubenswrapper[4090]: E0309 16:23:38.479025 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.613324705 +0000 UTC m=+0.788639694,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.483158 master-0 kubenswrapper[4090]: E0309 16:23:38.483045 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1df14a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.613334195 +0000 UTC m=+0.788649184,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.488546 master-0 kubenswrapper[4090]: E0309 16:23:38.488464 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.613511377 +0000 UTC m=+0.788826366,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.505757 master-0 kubenswrapper[4090]: E0309 16:23:38.505612 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.613529027 +0000 UTC m=+0.788844026,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.517585 master-0 kubenswrapper[4090]: E0309 16:23:38.517331 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1df14a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.613539807 +0000 UTC m=+0.788854806,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.523867 master-0 kubenswrapper[4090]: E0309 16:23:38.523735 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.613645328 +0000 UTC m=+0.788960347,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.529246 master-0 kubenswrapper[4090]: E0309 16:23:38.529139 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.613679458 +0000 UTC m=+0.788994487,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.534134 master-0 kubenswrapper[4090]: E0309 16:23:38.534019 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1df14a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1df14a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418504353 +0000 UTC m=+0.593819342,LastTimestamp:2026-03-09 16:23:27.613713969 +0000 UTC m=+0.789028998,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.543828 master-0 kubenswrapper[4090]: E0309 16:23:38.543682 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1de964d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1de964d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418472013 +0000 UTC m=+0.593787002,LastTimestamp:2026-03-09 16:23:27.614173503 +0000 UTC m=+0.789488492,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.550276 master-0 kubenswrapper[4090]: E0309 16:23:38.550110 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189b38deb1deef7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189b38deb1deef7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:27.418494843 +0000 UTC m=+0.593809832,LastTimestamp:2026-03-09 16:23:27.614195423 +0000 UTC m=+0.789510412,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.556315 master-0 kubenswrapper[4090]: E0309 16:23:38.556160 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38def828cffe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:28.597741566 +0000 UTC m=+1.773056555,LastTimestamp:2026-03-09 16:23:28.597741566 +0000 UTC m=+1.773056555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.569455 master-0 kubenswrapper[4090]: E0309 16:23:38.569281 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38def82a140f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:28.597824527 +0000 UTC m=+1.773139516,LastTimestamp:2026-03-09 16:23:28.597824527 +0000 UTC m=+1.773139516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.576803 master-0 kubenswrapper[4090]: E0309 16:23:38.576648 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189b38def836b286 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:28.598651526 +0000 UTC m=+1.773966515,LastTimestamp:2026-03-09 16:23:28.598651526 +0000 UTC m=+1.773966515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.581844 master-0 kubenswrapper[4090]: E0309 16:23:38.581627 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38def97c6ea5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:28.619998885 +0000 UTC m=+1.795313874,LastTimestamp:2026-03-09 16:23:28.619998885 +0000 UTC m=+1.795313874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.601919 master-0 kubenswrapper[4090]: E0309 16:23:38.600934 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38defb3daf7c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:28.649441148 +0000 UTC m=+1.824756137,LastTimestamp:2026-03-09 16:23:28.649441148 +0000 UTC m=+1.824756137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:38.608829 master-0 kubenswrapper[4090]: E0309 16:23:38.608530 4090 event.go:359] "Server rejected event (will not retry!)"
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfcf8f0f01 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 3.613s (3.613s including waiting). Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.211543809 +0000 UTC m=+5.386858798,LastTimestamp:2026-03-09 16:23:32.211543809 +0000 UTC m=+5.386858798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.614813 master-0 kubenswrapper[4090]: E0309 16:23:38.614596 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38dfd1680572 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 3.622s (3.622s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.24253989 +0000 UTC m=+5.417854889,LastTimestamp:2026-03-09 16:23:32.24253989 +0000 UTC m=+5.417854889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.619703 master-0 kubenswrapper[4090]: E0309 16:23:38.619562 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfdb1c82e5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.405363429 +0000 UTC m=+5.580678418,LastTimestamp:2026-03-09 16:23:32.405363429 +0000 UTC m=+5.580678418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.624326 master-0 kubenswrapper[4090]: E0309 16:23:38.624168 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38dfdb227318 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.4057526 +0000 UTC m=+5.581067589,LastTimestamp:2026-03-09 16:23:32.4057526 +0000 UTC m=+5.581067589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.629730 master-0 kubenswrapper[4090]: E0309 16:23:38.629497 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38dfdbda4094 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.417798292 +0000 UTC m=+5.593113281,LastTimestamp:2026-03-09 16:23:32.417798292 +0000 UTC m=+5.593113281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.633665 master-0 kubenswrapper[4090]: E0309 16:23:38.633580 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38dfdbff3f8d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.420222861 +0000 UTC m=+5.595537850,LastTimestamp:2026-03-09 16:23:32.420222861 +0000 UTC m=+5.595537850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.637792 master-0 kubenswrapper[4090]: E0309 16:23:38.637639 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfdc12551f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.421473567 +0000 UTC m=+5.596788556,LastTimestamp:2026-03-09 16:23:32.421473567 +0000 UTC m=+5.596788556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.642364 master-0 kubenswrapper[4090]: E0309 16:23:38.642005 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfe2bcc7d2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.533307346 +0000 UTC m=+5.708622335,LastTimestamp:2026-03-09 16:23:32.533307346 +0000 UTC m=+5.708622335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.647055 master-0 kubenswrapper[4090]: E0309 16:23:38.646916 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38dfe981a217 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.646871575 +0000 UTC m=+5.822186564,LastTimestamp:2026-03-09 16:23:32.646871575 +0000 UTC m=+5.822186564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.651866 master-0 kubenswrapper[4090]: E0309 16:23:38.651393 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b38dfea376523 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.658783523 +0000 UTC m=+5.834098512,LastTimestamp:2026-03-09 16:23:32.658783523 +0000 UTC m=+5.834098512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.656912 master-0 kubenswrapper[4090]: E0309 16:23:38.656730 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfeea48718 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.733044504 +0000 UTC m=+5.908359493,LastTimestamp:2026-03-09 
16:23:32.733044504 +0000 UTC m=+5.908359493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.661840 master-0 kubenswrapper[4090]: E0309 16:23:38.661645 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfef6855d5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.745876949 +0000 UTC m=+5.921191958,LastTimestamp:2026-03-09 16:23:32.745876949 +0000 UTC m=+5.921191958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.667870 master-0 kubenswrapper[4090]: E0309 16:23:38.667737 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38dfe2bcc7d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfe2bcc7d2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.533307346 +0000 UTC m=+5.708622335,LastTimestamp:2026-03-09 16:23:35.633584759 +0000 UTC m=+8.808899748,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.672704 master-0 kubenswrapper[4090]: E0309 16:23:38.672581 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189b38e0a02bd99f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.112s (7.112s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.711480223 +0000 UTC m=+8.886795212,LastTimestamp:2026-03-09 16:23:35.711480223 +0000 UTC m=+8.886795212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.677777 master-0 kubenswrapper[4090]: E0309 16:23:38.677626 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0a2b2cbb0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.156s (7.156s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.753878448 +0000 UTC m=+8.929193437,LastTimestamp:2026-03-09 16:23:35.753878448 +0000 UTC m=+8.929193437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.683509 master-0 kubenswrapper[4090]: E0309 16:23:38.683053 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e0a2dc5690 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.107s (7.107s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.756600976 +0000 UTC m=+8.931915965,LastTimestamp:2026-03-09 16:23:35.756600976 +0000 UTC m=+8.931915965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.689298 master-0 kubenswrapper[4090]: E0309 16:23:38.689103 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38dfeea48718\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfeea48718 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.733044504 +0000 UTC m=+5.908359493,LastTimestamp:2026-03-09 16:23:35.812466584 +0000 UTC m=+8.987781573,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.714639 master-0 kubenswrapper[4090]: E0309 16:23:38.714519 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38dfef6855d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfef6855d5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.745876949 +0000 UTC m=+5.921191958,LastTimestamp:2026-03-09 16:23:35.831618969 +0000 UTC m=+9.006933948,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.720177 master-0 kubenswrapper[4090]: E0309 16:23:38.720061 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189b38e0b004831f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.977337631 +0000 UTC m=+9.152652630,LastTimestamp:2026-03-09 16:23:35.977337631 +0000 UTC m=+9.152652630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.725051 master-0 kubenswrapper[4090]: E0309 16:23:38.724928 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e0b0b7d0db kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.989088475 +0000 UTC m=+9.164403474,LastTimestamp:2026-03-09 16:23:35.989088475 +0000 UTC m=+9.164403474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.729364 master-0 kubenswrapper[4090]: E0309 16:23:38.729271 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189b38e0b0cf081c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.990609948 +0000 UTC m=+9.165924947,LastTimestamp:2026-03-09 16:23:35.990609948 +0000 UTC m=+9.165924947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.735017 master-0 kubenswrapper[4090]: E0309 16:23:38.734851 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0b142ce87 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:35.998197383 +0000 UTC m=+9.173512382,LastTimestamp:2026-03-09 16:23:35.998197383 +0000 UTC m=+9.173512382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.740960 master-0 kubenswrapper[4090]: E0309 16:23:38.740774 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e0b18647ae kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.00261931 +0000 UTC m=+9.177934309,LastTimestamp:2026-03-09 16:23:36.00261931 +0000 UTC m=+9.177934309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.748112 master-0 kubenswrapper[4090]: 
E0309 16:23:38.746645 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e0b197989d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.003754141 +0000 UTC m=+9.179069150,LastTimestamp:2026-03-09 16:23:36.003754141 +0000 UTC m=+9.179069150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.761316 master-0 kubenswrapper[4090]: E0309 16:23:38.761157 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0b21c68c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.012458179 +0000 UTC m=+9.187773178,LastTimestamp:2026-03-09 16:23:36.012458179 +0000 UTC 
m=+9.187773178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.767579 master-0 kubenswrapper[4090]: E0309 16:23:38.767464 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0d24b3998 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.552397208 +0000 UTC m=+9.727712187,LastTimestamp:2026-03-09 16:23:36.552397208 +0000 UTC m=+9.727712187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.773735 master-0 kubenswrapper[4090]: E0309 16:23:38.773626 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38e0d268c02b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.554332203 +0000 UTC m=+9.729647202,LastTimestamp:2026-03-09 16:23:36.554332203 +0000 UTC m=+9.729647202,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.780022 master-0 kubenswrapper[4090]: E0309 16:23:38.779834 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0e0de6733 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.796923699 +0000 UTC m=+9.972238688,LastTimestamp:2026-03-09 16:23:36.796923699 +0000 UTC m=+9.972238688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.785816 master-0 kubenswrapper[4090]: E0309 16:23:38.785683 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0e17c329b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.807264923 +0000 UTC m=+9.982579912,LastTimestamp:2026-03-09 16:23:36.807264923 +0000 UTC m=+9.982579912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.791229 master-0 kubenswrapper[4090]: E0309 16:23:38.791082 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e0e1979097 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.809058455 +0000 UTC m=+9.984373444,LastTimestamp:2026-03-09 16:23:36.809058455 +0000 UTC m=+9.984373444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.796960 master-0 kubenswrapper[4090]: E0309 16:23:38.796810 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38e0d268c02b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38e0d268c02b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.554332203 +0000 UTC m=+9.729647202,LastTimestamp:2026-03-09 16:23:37.572416884 +0000 UTC m=+10.747731893,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.802579 master-0 kubenswrapper[4090]: E0309 16:23:38.802412 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e143c141b4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 2.452s (2.452s including waiting). Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:38.45595794 +0000 UTC m=+11.631272929,LastTimestamp:2026-03-09 16:23:38.45595794 +0000 UTC m=+11.631272929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.807863 master-0 kubenswrapper[4090]: E0309 16:23:38.807739 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e150fab5ac kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:38.677826988 +0000 UTC m=+11.853141977,LastTimestamp:2026-03-09 16:23:38.677826988 +0000 UTC m=+11.853141977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:38.829251 master-0 kubenswrapper[4090]: E0309 16:23:38.829069 4090 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b38e1532b1780 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:38.714552192 +0000 UTC m=+11.889867181,LastTimestamp:2026-03-09 16:23:38.714552192 +0000 UTC m=+11.889867181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:39.374295 master-0 kubenswrapper[4090]: I0309 16:23:39.374193 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:39.577489 master-0 kubenswrapper[4090]: I0309 16:23:39.577322 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"} Mar 09 16:23:39.577489 master-0 kubenswrapper[4090]: I0309 16:23:39.577445 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:39.578179 master-0 kubenswrapper[4090]: I0309 16:23:39.578152 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 
16:23:39.578179 master-0 kubenswrapper[4090]: I0309 16:23:39.578176 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:39.578258 master-0 kubenswrapper[4090]: I0309 16:23:39.578185 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:39.606245 master-0 kubenswrapper[4090]: E0309 16:23:39.606112 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e188137eaa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 2.793s (2.793s including waiting). 
Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:39.602198186 +0000 UTC m=+12.777513175,LastTimestamp:2026-03-09 16:23:39.602198186 +0000 UTC m=+12.777513175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:39.798082 master-0 kubenswrapper[4090]: E0309 16:23:39.797915 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e193494e4a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:39.790274122 +0000 UTC m=+12.965589111,LastTimestamp:2026-03-09 16:23:39.790274122 +0000 UTC m=+12.965589111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:39.845722 master-0 kubenswrapper[4090]: E0309 16:23:39.845499 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189b38e196310d7d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:39.839016317 +0000 UTC m=+13.014331306,LastTimestamp:2026-03-09 16:23:39.839016317 +0000 UTC m=+13.014331306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:23:39.990653 master-0 kubenswrapper[4090]: E0309 16:23:39.990408 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 09 16:23:40.197707 master-0 kubenswrapper[4090]: I0309 16:23:40.197538 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:40.199325 master-0 kubenswrapper[4090]: I0309 16:23:40.199259 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:40.199325 master-0 kubenswrapper[4090]: I0309 16:23:40.199305 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:40.199325 master-0 kubenswrapper[4090]: I0309 16:23:40.199318 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:40.199724 master-0 kubenswrapper[4090]: I0309 16:23:40.199400 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:23:40.205471 master-0 kubenswrapper[4090]: E0309 16:23:40.205348 4090 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 09 16:23:40.353286 master-0 kubenswrapper[4090]: I0309 16:23:40.353078 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:40.371172 master-0 kubenswrapper[4090]: I0309 16:23:40.371086 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:40.583362 master-0 kubenswrapper[4090]: I0309 16:23:40.583307 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:40.583844 master-0 kubenswrapper[4090]: I0309 16:23:40.583407 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:40.583844 master-0 kubenswrapper[4090]: I0309 16:23:40.583293 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e"} Mar 09 16:23:40.584209 master-0 kubenswrapper[4090]: I0309 16:23:40.584171 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:40.584241 master-0 kubenswrapper[4090]: I0309 16:23:40.584220 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:40.584241 master-0 kubenswrapper[4090]: I0309 16:23:40.584238 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" 
Mar 09 16:23:40.584918 master-0 kubenswrapper[4090]: I0309 16:23:40.584876 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:40.584966 master-0 kubenswrapper[4090]: I0309 16:23:40.584951 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:40.584997 master-0 kubenswrapper[4090]: I0309 16:23:40.584980 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:40.991691 master-0 kubenswrapper[4090]: I0309 16:23:40.991596 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:40.998308 master-0 kubenswrapper[4090]: I0309 16:23:40.998245 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:41.371911 master-0 kubenswrapper[4090]: I0309 16:23:41.371729 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:41.586530 master-0 kubenswrapper[4090]: I0309 16:23:41.586416 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:41.586530 master-0 kubenswrapper[4090]: I0309 16:23:41.586470 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:41.587687 master-0 kubenswrapper[4090]: I0309 16:23:41.586585 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:41.587687 master-0 kubenswrapper[4090]: I0309 16:23:41.587560 4090 kubelet_node_status.go:724] "Recording event message 
for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:41.587687 master-0 kubenswrapper[4090]: I0309 16:23:41.587641 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:41.587687 master-0 kubenswrapper[4090]: I0309 16:23:41.587647 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:41.587687 master-0 kubenswrapper[4090]: I0309 16:23:41.587665 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:41.588086 master-0 kubenswrapper[4090]: I0309 16:23:41.587671 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:41.588086 master-0 kubenswrapper[4090]: I0309 16:23:41.587856 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:41.747615 master-0 kubenswrapper[4090]: I0309 16:23:41.747497 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 09 16:23:41.779260 master-0 kubenswrapper[4090]: I0309 16:23:41.779191 4090 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 09 16:23:42.365671 master-0 kubenswrapper[4090]: I0309 16:23:42.365557 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:42.589094 master-0 kubenswrapper[4090]: I0309 16:23:42.588995 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:42.590223 master-0 kubenswrapper[4090]: I0309 16:23:42.590164 4090 kubelet_node_status.go:724] "Recording 
event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:42.590339 master-0 kubenswrapper[4090]: I0309 16:23:42.590230 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:42.590339 master-0 kubenswrapper[4090]: I0309 16:23:42.590254 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:42.959701 master-0 kubenswrapper[4090]: I0309 16:23:42.959583 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:23:42.959978 master-0 kubenswrapper[4090]: I0309 16:23:42.959819 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:42.961633 master-0 kubenswrapper[4090]: I0309 16:23:42.961567 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:42.961633 master-0 kubenswrapper[4090]: I0309 16:23:42.961606 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:42.961633 master-0 kubenswrapper[4090]: I0309 16:23:42.961619 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:43.367762 master-0 kubenswrapper[4090]: I0309 16:23:43.367619 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:43.969853 master-0 kubenswrapper[4090]: I0309 16:23:43.969729 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:43.971059 master-0 kubenswrapper[4090]: I0309 
16:23:43.969975 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:43.971786 master-0 kubenswrapper[4090]: I0309 16:23:43.971725 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:43.971786 master-0 kubenswrapper[4090]: I0309 16:23:43.971771 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:43.971786 master-0 kubenswrapper[4090]: I0309 16:23:43.971787 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:43.975278 master-0 kubenswrapper[4090]: I0309 16:23:43.975205 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:44.082078 master-0 kubenswrapper[4090]: W0309 16:23:44.082006 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 09 16:23:44.082332 master-0 kubenswrapper[4090]: E0309 16:23:44.082089 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 09 16:23:44.366018 master-0 kubenswrapper[4090]: I0309 16:23:44.365818 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:44.593298 master-0 kubenswrapper[4090]: I0309 16:23:44.593232 4090 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Mar 09 16:23:44.594125 master-0 kubenswrapper[4090]: I0309 16:23:44.594062 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:44.594125 master-0 kubenswrapper[4090]: I0309 16:23:44.594108 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:44.594125 master-0 kubenswrapper[4090]: I0309 16:23:44.594121 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:45.189453 master-0 kubenswrapper[4090]: I0309 16:23:45.189015 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:23:45.190272 master-0 kubenswrapper[4090]: I0309 16:23:45.189585 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:45.190834 master-0 kubenswrapper[4090]: I0309 16:23:45.190780 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:45.190917 master-0 kubenswrapper[4090]: I0309 16:23:45.190840 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:45.190917 master-0 kubenswrapper[4090]: I0309 16:23:45.190861 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:45.195578 master-0 kubenswrapper[4090]: I0309 16:23:45.195524 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:23:45.366511 master-0 kubenswrapper[4090]: I0309 16:23:45.366405 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:45.597513 master-0 kubenswrapper[4090]: I0309 16:23:45.597384 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:45.598350 master-0 kubenswrapper[4090]: I0309 16:23:45.598296 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:45.598444 master-0 kubenswrapper[4090]: I0309 16:23:45.598353 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:45.598444 master-0 kubenswrapper[4090]: I0309 16:23:45.598375 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:45.605480 master-0 kubenswrapper[4090]: I0309 16:23:45.605405 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:23:46.088410 master-0 kubenswrapper[4090]: I0309 16:23:46.088303 4090 csr.go:261] certificate signing request csr-gw6gr is approved, waiting to be issued Mar 09 16:23:46.216748 master-0 kubenswrapper[4090]: W0309 16:23:46.216678 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 09 16:23:46.217197 master-0 kubenswrapper[4090]: E0309 16:23:46.216748 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 09 16:23:46.365496 
master-0 kubenswrapper[4090]: I0309 16:23:46.365306 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:46.600149 master-0 kubenswrapper[4090]: I0309 16:23:46.599756 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:46.600516 master-0 kubenswrapper[4090]: I0309 16:23:46.600414 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:46.600516 master-0 kubenswrapper[4090]: I0309 16:23:46.600473 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:46.600516 master-0 kubenswrapper[4090]: I0309 16:23:46.600482 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:46.996168 master-0 kubenswrapper[4090]: E0309 16:23:46.996113 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 09 16:23:47.206660 master-0 kubenswrapper[4090]: I0309 16:23:47.206564 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:47.208281 master-0 kubenswrapper[4090]: I0309 16:23:47.208230 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:23:47.208281 master-0 kubenswrapper[4090]: I0309 16:23:47.208276 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:23:47.208403 master-0 kubenswrapper[4090]: 
I0309 16:23:47.208318 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:23:47.208403 master-0 kubenswrapper[4090]: I0309 16:23:47.208399 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:23:47.216332 master-0 kubenswrapper[4090]: E0309 16:23:47.216244 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 09 16:23:47.367723 master-0 kubenswrapper[4090]: I0309 16:23:47.367561 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:47.483541 master-0 kubenswrapper[4090]: E0309 16:23:47.483464 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 09 16:23:48.364987 master-0 kubenswrapper[4090]: I0309 16:23:48.364919 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 09 16:23:49.302356 master-0 kubenswrapper[4090]: I0309 16:23:49.302306 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:23:49.303387 master-0 kubenswrapper[4090]: I0309 16:23:49.302989 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:23:49.304075 master-0 kubenswrapper[4090]: I0309 16:23:49.304036 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory"
Mar 09 16:23:49.304136 master-0 kubenswrapper[4090]: I0309 16:23:49.304085 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:49.304136 master-0 kubenswrapper[4090]: I0309 16:23:49.304098 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:49.307333 master-0 kubenswrapper[4090]: I0309 16:23:49.307284 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:23:49.366177 master-0 kubenswrapper[4090]: I0309 16:23:49.366089 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:49.558857 master-0 kubenswrapper[4090]: W0309 16:23:49.558698 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:49.558857 master-0 kubenswrapper[4090]: E0309 16:23:49.558764 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 09 16:23:49.605274 master-0 kubenswrapper[4090]: I0309 16:23:49.605227 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:49.607074 master-0 kubenswrapper[4090]: I0309 16:23:49.607044 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:49.607074 master-0 kubenswrapper[4090]: I0309 16:23:49.607075 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:49.607312 master-0 kubenswrapper[4090]: I0309 16:23:49.607083 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:50.366616 master-0 kubenswrapper[4090]: I0309 16:23:50.366534 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:50.508122 master-0 kubenswrapper[4090]: I0309 16:23:50.508048 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:50.509146 master-0 kubenswrapper[4090]: I0309 16:23:50.509071 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:50.509146 master-0 kubenswrapper[4090]: I0309 16:23:50.509144 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:50.509309 master-0 kubenswrapper[4090]: I0309 16:23:50.509187 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:50.509504 master-0 kubenswrapper[4090]: I0309 16:23:50.509482 4090 scope.go:117] "RemoveContainer" containerID="34ba6f4206d152cbab665ff7e5cdbee73846c546c16f056560194b7119321fa3"
Mar 09 16:23:50.517735 master-0 kubenswrapper[4090]: E0309 16:23:50.517611 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38dfe2bcc7d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfe2bcc7d2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.533307346 +0000 UTC m=+5.708622335,LastTimestamp:2026-03-09 16:23:50.512091646 +0000 UTC m=+23.687406635,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:50.730950 master-0 kubenswrapper[4090]: E0309 16:23:50.730748 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38dfeea48718\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfeea48718 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.733044504 +0000 UTC m=+5.908359493,LastTimestamp:2026-03-09 16:23:50.725874723 +0000 UTC m=+23.901189712,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:50.744784 master-0 kubenswrapper[4090]: E0309 16:23:50.744656 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38dfef6855d5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38dfef6855d5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:32.745876949 +0000 UTC m=+5.921191958,LastTimestamp:2026-03-09 16:23:50.739338886 +0000 UTC m=+23.914653865,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:50.966508 master-0 kubenswrapper[4090]: W0309 16:23:50.966449 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 09 16:23:50.966914 master-0 kubenswrapper[4090]: E0309 16:23:50.966528 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 09 16:23:51.367221 master-0 kubenswrapper[4090]: I0309 16:23:51.367024 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:51.611542 master-0 kubenswrapper[4090]: I0309 16:23:51.611477 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 09 16:23:51.612114 master-0 kubenswrapper[4090]: I0309 16:23:51.612078 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 09 16:23:51.612733 master-0 kubenswrapper[4090]: I0309 16:23:51.612693 4090 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482" exitCode=1
Mar 09 16:23:51.612774 master-0 kubenswrapper[4090]: I0309 16:23:51.612737 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482"}
Mar 09 16:23:51.612813 master-0 kubenswrapper[4090]: I0309 16:23:51.612778 4090 scope.go:117] "RemoveContainer" containerID="34ba6f4206d152cbab665ff7e5cdbee73846c546c16f056560194b7119321fa3"
Mar 09 16:23:51.612950 master-0 kubenswrapper[4090]: I0309 16:23:51.612920 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:51.613890 master-0 kubenswrapper[4090]: I0309 16:23:51.613857 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:51.613937 master-0 kubenswrapper[4090]: I0309 16:23:51.613894 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:51.613937 master-0 kubenswrapper[4090]: I0309 16:23:51.613908 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:51.614242 master-0 kubenswrapper[4090]: I0309 16:23:51.614211 4090 scope.go:117] "RemoveContainer" containerID="858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482"
Mar 09 16:23:51.614434 master-0 kubenswrapper[4090]: E0309 16:23:51.614374 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 09 16:23:51.620861 master-0 kubenswrapper[4090]: E0309 16:23:51.620673 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189b38e0d268c02b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189b38e0d268c02b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:23:36.554332203 +0000 UTC m=+9.729647202,LastTimestamp:2026-03-09 16:23:51.614352121 +0000 UTC m=+24.789667120,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:23:52.367348 master-0 kubenswrapper[4090]: I0309 16:23:52.367250 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:52.619241 master-0 kubenswrapper[4090]: I0309 16:23:52.619058 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 09 16:23:53.367261 master-0 kubenswrapper[4090]: I0309 16:23:53.367024 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:54.005325 master-0 kubenswrapper[4090]: E0309 16:23:54.005262 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 09 16:23:54.217187 master-0 kubenswrapper[4090]: I0309 16:23:54.217101 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:23:54.219389 master-0 kubenswrapper[4090]: I0309 16:23:54.219337 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:23:54.219519 master-0 kubenswrapper[4090]: I0309 16:23:54.219406 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:23:54.219519 master-0 kubenswrapper[4090]: I0309 16:23:54.219459 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:23:54.219591 master-0 kubenswrapper[4090]: I0309 16:23:54.219559 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:23:54.227595 master-0 kubenswrapper[4090]: E0309 16:23:54.227485 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 09 16:23:54.369779 master-0 kubenswrapper[4090]: I0309 16:23:54.369609 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:55.366346 master-0 kubenswrapper[4090]: I0309 16:23:55.366241 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:56.376450 master-0 kubenswrapper[4090]: I0309 16:23:56.376197 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 09 16:23:56.415027 master-0 kubenswrapper[4090]: I0309 16:23:56.414958 4090 csr.go:257] certificate signing request csr-gw6gr is issued
Mar 09 16:23:57.259784 master-0 kubenswrapper[4090]: I0309 16:23:57.259714 4090 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 09 16:23:57.370252 master-0 kubenswrapper[4090]: I0309 16:23:57.370185 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:57.388905 master-0 kubenswrapper[4090]: I0309 16:23:57.388842 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:57.416226 master-0 kubenswrapper[4090]: I0309 16:23:57.416153 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 09:46:26.715687434 +0000 UTC
Mar 09 16:23:57.416226 master-0 kubenswrapper[4090]: I0309 16:23:57.416205 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h22m29.299486521s for next certificate rotation
Mar 09 16:23:57.446093 master-0 kubenswrapper[4090]: I0309 16:23:57.446023 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:57.484523 master-0 kubenswrapper[4090]: E0309 16:23:57.484467 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 09 16:23:57.718047 master-0 kubenswrapper[4090]: I0309 16:23:57.717987 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:57.718047 master-0 kubenswrapper[4090]: E0309 16:23:57.718026 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 09 16:23:57.739794 master-0 kubenswrapper[4090]: I0309 16:23:57.739742 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:57.789371 master-0 kubenswrapper[4090]: I0309 16:23:57.789300 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:57.845037 master-0 kubenswrapper[4090]: I0309 16:23:57.844962 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:58.118690 master-0 kubenswrapper[4090]: I0309 16:23:58.118515 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:58.118690 master-0 kubenswrapper[4090]: E0309 16:23:58.118582 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 09 16:23:58.215443 master-0 kubenswrapper[4090]: I0309 16:23:58.215350 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:58.231503 master-0 kubenswrapper[4090]: I0309 16:23:58.231405 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:58.290665 master-0 kubenswrapper[4090]: I0309 16:23:58.290614 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:58.549544 master-0 kubenswrapper[4090]: I0309 16:23:58.549462 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:58.549544 master-0 kubenswrapper[4090]: E0309 16:23:58.549519 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 09 16:23:59.129477 master-0 kubenswrapper[4090]: I0309 16:23:59.129378 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:59.146444 master-0 kubenswrapper[4090]: I0309 16:23:59.146352 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:59.215121 master-0 kubenswrapper[4090]: I0309 16:23:59.215062 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:59.478442 master-0 kubenswrapper[4090]: I0309 16:23:59.478367 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 09 16:23:59.478442 master-0 kubenswrapper[4090]: E0309 16:23:59.478408 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 09 16:24:01.013525 master-0 kubenswrapper[4090]: E0309 16:24:01.013462 4090 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 09 16:24:01.228380 master-0 kubenswrapper[4090]: I0309 16:24:01.228294 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:24:01.229484 master-0 kubenswrapper[4090]: I0309 16:24:01.229411 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:24:01.229484 master-0 kubenswrapper[4090]: I0309 16:24:01.229468 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:24:01.229484 master-0 kubenswrapper[4090]: I0309 16:24:01.229476 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:24:01.229962 master-0 kubenswrapper[4090]: I0309 16:24:01.229531 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:24:01.240519 master-0 kubenswrapper[4090]: I0309 16:24:01.240444 4090 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 09 16:24:01.240519 master-0 kubenswrapper[4090]: E0309 16:24:01.240494 4090 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 09 16:24:01.257341 master-0 kubenswrapper[4090]: E0309 16:24:01.257269 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.358382 master-0 kubenswrapper[4090]: E0309 16:24:01.358220 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.385200 master-0 kubenswrapper[4090]: I0309 16:24:01.385094 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 09 16:24:01.405038 master-0 kubenswrapper[4090]: I0309 16:24:01.404973 4090 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 09 16:24:01.459358 master-0 kubenswrapper[4090]: E0309 16:24:01.459262 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.559881 master-0 kubenswrapper[4090]: E0309 16:24:01.559647 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.660889 master-0 kubenswrapper[4090]: E0309 16:24:01.660637 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.761704 master-0 kubenswrapper[4090]: E0309 16:24:01.761414 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.862684 master-0 kubenswrapper[4090]: E0309 16:24:01.862565 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:01.963250 master-0 kubenswrapper[4090]: E0309 16:24:01.963163 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.063937 master-0 kubenswrapper[4090]: E0309 16:24:02.063851 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.164127 master-0 kubenswrapper[4090]: E0309 16:24:02.164051 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.264896 master-0 kubenswrapper[4090]: E0309 16:24:02.264761 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.365316 master-0 kubenswrapper[4090]: E0309 16:24:02.365216 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.466045 master-0 kubenswrapper[4090]: E0309 16:24:02.465948 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.631595 master-0 kubenswrapper[4090]: E0309 16:24:02.567213 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.667770 master-0 kubenswrapper[4090]: E0309 16:24:02.667695 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.768709 master-0 kubenswrapper[4090]: E0309 16:24:02.768617 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.869221 master-0 kubenswrapper[4090]: E0309 16:24:02.868936 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:02.969354 master-0 kubenswrapper[4090]: E0309 16:24:02.969302 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.069461 master-0 kubenswrapper[4090]: E0309 16:24:03.069367 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.170510 master-0 kubenswrapper[4090]: E0309 16:24:03.170411 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.271371 master-0 kubenswrapper[4090]: E0309 16:24:03.271223 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.371847 master-0 kubenswrapper[4090]: E0309 16:24:03.371765 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.472656 master-0 kubenswrapper[4090]: E0309 16:24:03.472581 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.573282 master-0 kubenswrapper[4090]: E0309 16:24:03.573095 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.673552 master-0 kubenswrapper[4090]: E0309 16:24:03.673415 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.775648 master-0 kubenswrapper[4090]: E0309 16:24:03.775581 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.876530 master-0 kubenswrapper[4090]: E0309 16:24:03.876264 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:03.977238 master-0 kubenswrapper[4090]: E0309 16:24:03.977111 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.078194 master-0 kubenswrapper[4090]: E0309 16:24:04.078076 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.178916 master-0 kubenswrapper[4090]: E0309 16:24:04.178735 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.279453 master-0 kubenswrapper[4090]: E0309 16:24:04.279319 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.379994 master-0 kubenswrapper[4090]: E0309 16:24:04.379846 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.480648 master-0 kubenswrapper[4090]: E0309 16:24:04.480400 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.580910 master-0 kubenswrapper[4090]: E0309 16:24:04.580756 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.681287 master-0 kubenswrapper[4090]: E0309 16:24:04.681182 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.782493 master-0 kubenswrapper[4090]: E0309 16:24:04.782170 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.882789 master-0 kubenswrapper[4090]: E0309 16:24:04.882622 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:04.983627 master-0 kubenswrapper[4090]: E0309 16:24:04.983462 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.084655 master-0 kubenswrapper[4090]: E0309 16:24:05.084368 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.185368 master-0 kubenswrapper[4090]: E0309 16:24:05.185269 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.285582 master-0 kubenswrapper[4090]: E0309 16:24:05.285481 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.386794 master-0 kubenswrapper[4090]: E0309 16:24:05.386644 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.487523 master-0 kubenswrapper[4090]: E0309 16:24:05.487413 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.588605 master-0 kubenswrapper[4090]: E0309 16:24:05.588487 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.689598 master-0 kubenswrapper[4090]: E0309 16:24:05.689475 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.790561 master-0 kubenswrapper[4090]: E0309 16:24:05.790409 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.891379 master-0 kubenswrapper[4090]: E0309 16:24:05.891287 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:05.992228 master-0 kubenswrapper[4090]: E0309 16:24:05.991959 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.092904 master-0 kubenswrapper[4090]: E0309 16:24:06.092762 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.193120 master-0 kubenswrapper[4090]: E0309 16:24:06.193001 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.293492 master-0 kubenswrapper[4090]: E0309 16:24:06.293207 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.394391 master-0 kubenswrapper[4090]: E0309 16:24:06.394232 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.495179 master-0 kubenswrapper[4090]: E0309 16:24:06.495038 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.507994 master-0 kubenswrapper[4090]: I0309 16:24:06.507897 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:24:06.509447 master-0 kubenswrapper[4090]: I0309 16:24:06.509377 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:24:06.509540 master-0 kubenswrapper[4090]: I0309 16:24:06.509472 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:24:06.509540 master-0 kubenswrapper[4090]: I0309 16:24:06.509487 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:24:06.510218 master-0 kubenswrapper[4090]: I0309 16:24:06.510175 4090 scope.go:117] "RemoveContainer" containerID="858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482"
Mar 09 16:24:06.510509 master-0 kubenswrapper[4090]: E0309 16:24:06.510466 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 09 16:24:06.596263 master-0 kubenswrapper[4090]: E0309 16:24:06.596080 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.697407 master-0 kubenswrapper[4090]: E0309 16:24:06.697315 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.798175 master-0 kubenswrapper[4090]: E0309 16:24:06.798102 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.899155 master-0 kubenswrapper[4090]: E0309 16:24:06.899002 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:06.916659 master-0 kubenswrapper[4090]: I0309 16:24:06.916598 4090 csr.go:261] certificate signing request csr-h75kr is approved, waiting to be issued
Mar 09 16:24:06.925577 master-0 kubenswrapper[4090]: I0309 16:24:06.925546 4090 csr.go:257] certificate signing request csr-h75kr is issued
Mar 09 16:24:07.000257 master-0 kubenswrapper[4090]: E0309 16:24:07.000177 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.101076 master-0 kubenswrapper[4090]: E0309 16:24:07.101020 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.201829 master-0 kubenswrapper[4090]: E0309 16:24:07.201750 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.302567 master-0 kubenswrapper[4090]: E0309 16:24:07.302456 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.403690 master-0 kubenswrapper[4090]: E0309 16:24:07.403611 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.485576 master-0 kubenswrapper[4090]: E0309 16:24:07.485370 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 09 16:24:07.504681 master-0 kubenswrapper[4090]: E0309 16:24:07.504590 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.605010 master-0 kubenswrapper[4090]: E0309 16:24:07.604922 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.705833 master-0 kubenswrapper[4090]: E0309 16:24:07.705719 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.806987 master-0 kubenswrapper[4090]: E0309 16:24:07.806709 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.906970 master-0 kubenswrapper[4090]: E0309 16:24:07.906886 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:07.927295 master-0 kubenswrapper[4090]: I0309 16:24:07.927169 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 09:46:20.046971711 +0000 UTC
Mar 09 16:24:07.927295 master-0 kubenswrapper[4090]: I0309 16:24:07.927251 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h22m12.119723999s for next certificate rotation
Mar 09 16:24:08.007720 master-0 kubenswrapper[4090]: E0309 16:24:08.007618 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.108645 master-0 kubenswrapper[4090]: E0309 16:24:08.108452 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.209238 master-0 kubenswrapper[4090]: E0309 16:24:08.209151 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.310404 master-0 kubenswrapper[4090]: E0309 16:24:08.310304 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.411354 master-0 kubenswrapper[4090]: E0309 16:24:08.411150 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.440274 master-0 kubenswrapper[4090]: I0309 16:24:08.440181 4090 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 09 16:24:08.511581 master-0 kubenswrapper[4090]: E0309 16:24:08.511493 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.611870 master-0 kubenswrapper[4090]: E0309 16:24:08.611778 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.712472 master-0 kubenswrapper[4090]: E0309 16:24:08.712328 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.735579 master-0 kubenswrapper[4090]: I0309 16:24:08.735475 4090 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 09 16:24:08.813165 master-0 kubenswrapper[4090]: E0309 16:24:08.813065 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.913671 master-0 kubenswrapper[4090]: E0309 16:24:08.913537 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 09 16:24:08.927973 master-0 kubenswrapper[4090]: I0309 16:24:08.927856 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 13:14:36.716323921 +0000 UTC
Mar 09 16:24:08.927973 master-0 kubenswrapper[4090]: I0309 16:24:08.927905 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h50m27.788422353s for next certificate rotation
Mar 09 16:24:09.014799 master-0 kubenswrapper[4090]: E0309 16:24:09.014650 4090
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.115911 master-0 kubenswrapper[4090]: E0309 16:24:09.115819 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.216670 master-0 kubenswrapper[4090]: E0309 16:24:09.216577 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.317537 master-0 kubenswrapper[4090]: E0309 16:24:09.317315 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.417912 master-0 kubenswrapper[4090]: E0309 16:24:09.417825 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.518138 master-0 kubenswrapper[4090]: E0309 16:24:09.518018 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.618316 master-0 kubenswrapper[4090]: E0309 16:24:09.618147 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.719389 master-0 kubenswrapper[4090]: E0309 16:24:09.719301 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.819933 master-0 kubenswrapper[4090]: E0309 16:24:09.819834 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:09.920649 master-0 kubenswrapper[4090]: E0309 16:24:09.920493 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.021061 master-0 kubenswrapper[4090]: E0309 16:24:10.020949 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.121776 
master-0 kubenswrapper[4090]: E0309 16:24:10.121639 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.222137 master-0 kubenswrapper[4090]: E0309 16:24:10.222069 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.323224 master-0 kubenswrapper[4090]: E0309 16:24:10.323157 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.424222 master-0 kubenswrapper[4090]: E0309 16:24:10.424138 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.524543 master-0 kubenswrapper[4090]: E0309 16:24:10.524321 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.624761 master-0 kubenswrapper[4090]: E0309 16:24:10.624645 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.725259 master-0 kubenswrapper[4090]: E0309 16:24:10.725174 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.826646 master-0 kubenswrapper[4090]: E0309 16:24:10.826355 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:10.927538 master-0 kubenswrapper[4090]: E0309 16:24:10.927338 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.028269 master-0 kubenswrapper[4090]: E0309 16:24:11.028195 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.129533 master-0 kubenswrapper[4090]: E0309 16:24:11.129246 4090 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 09 16:24:11.230540 master-0 kubenswrapper[4090]: E0309 16:24:11.230400 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.331168 master-0 kubenswrapper[4090]: E0309 16:24:11.331096 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.338453 master-0 kubenswrapper[4090]: E0309 16:24:11.338368 4090 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 09 16:24:11.432099 master-0 kubenswrapper[4090]: E0309 16:24:11.432034 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.532704 master-0 kubenswrapper[4090]: E0309 16:24:11.532608 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.633408 master-0 kubenswrapper[4090]: E0309 16:24:11.633337 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.733805 master-0 kubenswrapper[4090]: E0309 16:24:11.733670 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.833887 master-0 kubenswrapper[4090]: E0309 16:24:11.833824 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:11.934232 master-0 kubenswrapper[4090]: E0309 16:24:11.934149 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.034993 master-0 kubenswrapper[4090]: E0309 16:24:12.034810 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.135702 master-0 kubenswrapper[4090]: E0309 
16:24:12.135645 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.236618 master-0 kubenswrapper[4090]: E0309 16:24:12.236513 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.337878 master-0 kubenswrapper[4090]: E0309 16:24:12.337707 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.438265 master-0 kubenswrapper[4090]: E0309 16:24:12.438149 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.538605 master-0 kubenswrapper[4090]: E0309 16:24:12.538517 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.639513 master-0 kubenswrapper[4090]: E0309 16:24:12.639382 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 09 16:24:12.675266 master-0 kubenswrapper[4090]: I0309 16:24:12.675179 4090 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 09 16:24:13.372554 master-0 kubenswrapper[4090]: I0309 16:24:13.372483 4090 apiserver.go:52] "Watching apiserver" Mar 09 16:24:13.377279 master-0 kubenswrapper[4090]: I0309 16:24:13.377174 4090 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 09 16:24:13.377699 master-0 kubenswrapper[4090]: I0309 16:24:13.377615 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-rdwtz","openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk","openshift-network-operator/network-operator-7c649bf6d4-r82z7"] Mar 09 16:24:13.378197 master-0 kubenswrapper[4090]: I0309 16:24:13.378143 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.378197 master-0 kubenswrapper[4090]: I0309 16:24:13.378177 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.378521 master-0 kubenswrapper[4090]: I0309 16:24:13.378388 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.380768 master-0 kubenswrapper[4090]: I0309 16:24:13.380729 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 09 16:24:13.381268 master-0 kubenswrapper[4090]: I0309 16:24:13.381210 4090 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 09 16:24:13.382791 master-0 kubenswrapper[4090]: I0309 16:24:13.382754 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 09 16:24:13.383048 master-0 kubenswrapper[4090]: I0309 16:24:13.382988 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 09 16:24:13.383315 master-0 kubenswrapper[4090]: I0309 16:24:13.383276 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 09 16:24:13.383967 master-0 kubenswrapper[4090]: I0309 16:24:13.383925 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 09 16:24:13.383967 master-0 kubenswrapper[4090]: I0309 16:24:13.383969 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 09 16:24:13.384113 master-0 kubenswrapper[4090]: I0309 16:24:13.384007 4090 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 09 16:24:13.384113 master-0 kubenswrapper[4090]: I0309 16:24:13.384022 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 09 16:24:13.385852 master-0 kubenswrapper[4090]: I0309 16:24:13.385817 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 09 16:24:13.467936 master-0 kubenswrapper[4090]: I0309 16:24:13.467875 4090 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 09 16:24:13.514226 master-0 kubenswrapper[4090]: I0309 16:24:13.514134 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.514226 master-0 kubenswrapper[4090]: I0309 16:24:13.514197 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.514226 master-0 kubenswrapper[4090]: I0309 16:24:13.514224 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " 
pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.514226 master-0 kubenswrapper[4090]: I0309 16:24:13.514243 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.514226 master-0 kubenswrapper[4090]: I0309 16:24:13.514258 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.514678 master-0 kubenswrapper[4090]: I0309 16:24:13.514278 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.514678 master-0 kubenswrapper[4090]: I0309 16:24:13.514363 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-resolv-conf\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.514678 master-0 kubenswrapper[4090]: I0309 16:24:13.514470 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.514678 master-0 kubenswrapper[4090]: I0309 16:24:13.514582 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.514678 master-0 kubenswrapper[4090]: I0309 16:24:13.514671 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-ca-bundle\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.514843 master-0 kubenswrapper[4090]: I0309 16:24:13.514697 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjzs\" (UniqueName: \"kubernetes.io/projected/737facff-692c-4d57-a52b-e5f19b74ffd7-kube-api-access-hbjzs\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.514843 master-0 kubenswrapper[4090]: I0309 16:24:13.514726 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: 
\"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-sno-bootstrap-files\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.514843 master-0 kubenswrapper[4090]: I0309 16:24:13.514746 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.615871 master-0 kubenswrapper[4090]: I0309 16:24:13.615761 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.615871 master-0 kubenswrapper[4090]: I0309 16:24:13.615850 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.615871 master-0 kubenswrapper[4090]: I0309 16:24:13.615878 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " 
pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.616174 master-0 kubenswrapper[4090]: I0309 16:24:13.616082 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.616174 master-0 kubenswrapper[4090]: I0309 16:24:13.616156 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-var-run-resolv-conf\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.616174 master-0 kubenswrapper[4090]: I0309 16:24:13.616161 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616200 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616238 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616292 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616322 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-resolv-conf\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616347 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616369 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616394 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-ca-bundle\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616410 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbjzs\" (UniqueName: \"kubernetes.io/projected/737facff-692c-4d57-a52b-e5f19b74ffd7-kube-api-access-hbjzs\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616483 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-sno-bootstrap-files\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616506 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616520 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-resolv-conf\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: E0309 16:24:13.616624 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: E0309 16:24:13.616726 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:14.116669009 +0000 UTC m=+47.291984098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616754 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-ca-bundle\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617209 master-0 kubenswrapper[4090]: I0309 16:24:13.616561 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-sno-bootstrap-files\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " 
pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.617859 master-0 kubenswrapper[4090]: I0309 16:24:13.616808 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.617859 master-0 kubenswrapper[4090]: I0309 16:24:13.617095 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.617859 master-0 kubenswrapper[4090]: I0309 16:24:13.617579 4090 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 09 16:24:13.623449 master-0 kubenswrapper[4090]: I0309 16:24:13.623319 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.634461 master-0 kubenswrapper[4090]: I0309 16:24:13.634394 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:13.634666 master-0 kubenswrapper[4090]: I0309 16:24:13.634619 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.636006 master-0 kubenswrapper[4090]: I0309 16:24:13.635947 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbjzs\" (UniqueName: \"kubernetes.io/projected/737facff-692c-4d57-a52b-e5f19b74ffd7-kube-api-access-hbjzs\") pod \"assisted-installer-controller-rdwtz\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.731598 master-0 kubenswrapper[4090]: I0309 16:24:13.731525 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:13.742600 master-0 kubenswrapper[4090]: I0309 16:24:13.742539 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:24:13.753065 master-0 kubenswrapper[4090]: W0309 16:24:13.752891 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5565c060_5952_4e85_8873_18bb80663924.slice/crio-073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e WatchSource:0}: Error finding container 073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e: Status 404 returned error can't find the container with id 073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e Mar 09 16:24:14.121306 master-0 kubenswrapper[4090]: I0309 16:24:14.121237 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:14.121538 master-0 kubenswrapper[4090]: E0309 16:24:14.121371 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:14.121538 master-0 kubenswrapper[4090]: E0309 16:24:14.121468 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:15.121444927 +0000 UTC m=+48.296759936 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:14.452225 master-0 kubenswrapper[4090]: I0309 16:24:14.452156 4090 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 09 16:24:14.678104 master-0 kubenswrapper[4090]: I0309 16:24:14.678044 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" event={"ID":"5565c060-5952-4e85-8873-18bb80663924","Type":"ContainerStarted","Data":"073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e"} Mar 09 16:24:14.680620 master-0 kubenswrapper[4090]: I0309 16:24:14.680555 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-rdwtz" event={"ID":"737facff-692c-4d57-a52b-e5f19b74ffd7","Type":"ContainerStarted","Data":"daf7607bf63c826880c277db5efe1d7b1c54664d8a874cf3cbfd77d87cef3162"} Mar 09 16:24:15.127931 master-0 kubenswrapper[4090]: I0309 16:24:15.127816 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:15.128183 master-0 kubenswrapper[4090]: E0309 16:24:15.128021 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:15.128183 master-0 kubenswrapper[4090]: E0309 16:24:15.128101 4090 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:17.128078632 +0000 UTC m=+50.303393611 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:17.141947 master-0 kubenswrapper[4090]: I0309 16:24:17.141616 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:17.142517 master-0 kubenswrapper[4090]: E0309 16:24:17.141924 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:17.142517 master-0 kubenswrapper[4090]: E0309 16:24:17.142104 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:21.142066454 +0000 UTC m=+54.317381443 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:19.693920 master-0 kubenswrapper[4090]: I0309 16:24:19.693841 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" event={"ID":"5565c060-5952-4e85-8873-18bb80663924","Type":"ContainerStarted","Data":"a8d177dbb3aa3504d7da8194a33995b9c5590e73006f731e32a19254943a15e2"} Mar 09 16:24:19.695527 master-0 kubenswrapper[4090]: I0309 16:24:19.695391 4090 generic.go:334] "Generic (PLEG): container finished" podID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerID="8f1a1e060987b820e153c9d0c33ec719e219b362f2873a0c12439e503198da64" exitCode=0 Mar 09 16:24:19.695527 master-0 kubenswrapper[4090]: I0309 16:24:19.695491 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-rdwtz" event={"ID":"737facff-692c-4d57-a52b-e5f19b74ffd7","Type":"ContainerDied","Data":"8f1a1e060987b820e153c9d0c33ec719e219b362f2873a0c12439e503198da64"} Mar 09 16:24:19.706933 master-0 kubenswrapper[4090]: I0309 16:24:19.706844 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" podStartSLOduration=12.536436125 podStartE2EDuration="17.706830224s" podCreationTimestamp="2026-03-09 16:24:02 +0000 UTC" firstStartedPulling="2026-03-09 16:24:13.754700292 +0000 UTC m=+46.930015281" lastFinishedPulling="2026-03-09 16:24:18.925094391 +0000 UTC m=+52.100409380" observedRunningTime="2026-03-09 16:24:19.706656549 +0000 UTC m=+52.881971538" watchObservedRunningTime="2026-03-09 16:24:19.706830224 +0000 UTC m=+52.882145213" Mar 09 16:24:20.715829 master-0 kubenswrapper[4090]: I0309 16:24:20.715759 4090 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:20.768601 master-0 kubenswrapper[4090]: I0309 16:24:20.768525 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-resolv-conf\") pod \"737facff-692c-4d57-a52b-e5f19b74ffd7\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " Mar 09 16:24:20.768601 master-0 kubenswrapper[4090]: I0309 16:24:20.768577 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbjzs\" (UniqueName: \"kubernetes.io/projected/737facff-692c-4d57-a52b-e5f19b74ffd7-kube-api-access-hbjzs\") pod \"737facff-692c-4d57-a52b-e5f19b74ffd7\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " Mar 09 16:24:20.768601 master-0 kubenswrapper[4090]: I0309 16:24:20.768600 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-sno-bootstrap-files\") pod \"737facff-692c-4d57-a52b-e5f19b74ffd7\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " Mar 09 16:24:20.768601 master-0 kubenswrapper[4090]: I0309 16:24:20.768624 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-var-run-resolv-conf\") pod \"737facff-692c-4d57-a52b-e5f19b74ffd7\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.768615 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "737facff-692c-4d57-a52b-e5f19b74ffd7" (UID: 
"737facff-692c-4d57-a52b-e5f19b74ffd7"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.768649 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-ca-bundle\") pod \"737facff-692c-4d57-a52b-e5f19b74ffd7\" (UID: \"737facff-692c-4d57-a52b-e5f19b74ffd7\") " Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.768688 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "737facff-692c-4d57-a52b-e5f19b74ffd7" (UID: "737facff-692c-4d57-a52b-e5f19b74ffd7"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.768943 4090 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.768959 4090 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.768948 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "737facff-692c-4d57-a52b-e5f19b74ffd7" (UID: "737facff-692c-4d57-a52b-e5f19b74ffd7"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:24:20.769287 master-0 kubenswrapper[4090]: I0309 16:24:20.769035 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "737facff-692c-4d57-a52b-e5f19b74ffd7" (UID: "737facff-692c-4d57-a52b-e5f19b74ffd7"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:24:20.774164 master-0 kubenswrapper[4090]: I0309 16:24:20.774101 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737facff-692c-4d57-a52b-e5f19b74ffd7-kube-api-access-hbjzs" (OuterVolumeSpecName: "kube-api-access-hbjzs") pod "737facff-692c-4d57-a52b-e5f19b74ffd7" (UID: "737facff-692c-4d57-a52b-e5f19b74ffd7"). InnerVolumeSpecName "kube-api-access-hbjzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:24:20.869496 master-0 kubenswrapper[4090]: I0309 16:24:20.869373 4090 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:24:20.869496 master-0 kubenswrapper[4090]: I0309 16:24:20.869408 4090 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbjzs\" (UniqueName: \"kubernetes.io/projected/737facff-692c-4d57-a52b-e5f19b74ffd7-kube-api-access-hbjzs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:24:20.869496 master-0 kubenswrapper[4090]: I0309 16:24:20.869439 4090 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/737facff-692c-4d57-a52b-e5f19b74ffd7-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 09 16:24:21.172162 master-0 kubenswrapper[4090]: I0309 16:24:21.171976 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:21.172559 master-0 kubenswrapper[4090]: E0309 16:24:21.172186 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:21.172559 master-0 kubenswrapper[4090]: E0309 16:24:21.172316 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:29.172283894 +0000 UTC m=+62.347598923 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:21.262632 master-0 kubenswrapper[4090]: I0309 16:24:21.262550 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-pkrt2"] Mar 09 16:24:21.262858 master-0 kubenswrapper[4090]: E0309 16:24:21.262662 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller" Mar 09 16:24:21.262858 master-0 kubenswrapper[4090]: I0309 16:24:21.262683 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller" Mar 09 16:24:21.262858 master-0 kubenswrapper[4090]: I0309 16:24:21.262739 4090 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller" Mar 09 16:24:21.262982 master-0 kubenswrapper[4090]: I0309 16:24:21.262919 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:21.373173 master-0 kubenswrapper[4090]: I0309 16:24:21.373056 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl56m\" (UniqueName: \"kubernetes.io/projected/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09-kube-api-access-xl56m\") pod \"mtu-prober-pkrt2\" (UID: \"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09\") " pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:21.474502 master-0 kubenswrapper[4090]: I0309 16:24:21.473602 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl56m\" (UniqueName: \"kubernetes.io/projected/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09-kube-api-access-xl56m\") pod \"mtu-prober-pkrt2\" (UID: \"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09\") " pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:21.489808 master-0 kubenswrapper[4090]: I0309 16:24:21.489745 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl56m\" (UniqueName: \"kubernetes.io/projected/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09-kube-api-access-xl56m\") pod \"mtu-prober-pkrt2\" (UID: \"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09\") " pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:21.518194 master-0 kubenswrapper[4090]: I0309 16:24:21.518150 4090 scope.go:117] "RemoveContainer" containerID="858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482" Mar 09 16:24:21.518361 master-0 kubenswrapper[4090]: I0309 16:24:21.518239 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 09 16:24:21.575045 master-0 kubenswrapper[4090]: I0309 
16:24:21.574984 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:21.585761 master-0 kubenswrapper[4090]: W0309 16:24:21.585715 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d6b8350_34b6_4a0b_9027_3ea3c7e11d09.slice/crio-0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e WatchSource:0}: Error finding container 0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e: Status 404 returned error can't find the container with id 0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e Mar 09 16:24:21.705165 master-0 kubenswrapper[4090]: I0309 16:24:21.705119 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-rdwtz" event={"ID":"737facff-692c-4d57-a52b-e5f19b74ffd7","Type":"ContainerDied","Data":"daf7607bf63c826880c277db5efe1d7b1c54664d8a874cf3cbfd77d87cef3162"} Mar 09 16:24:21.705165 master-0 kubenswrapper[4090]: I0309 16:24:21.705163 4090 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf7607bf63c826880c277db5efe1d7b1c54664d8a874cf3cbfd77d87cef3162" Mar 09 16:24:21.705441 master-0 kubenswrapper[4090]: I0309 16:24:21.705276 4090 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:24:21.706057 master-0 kubenswrapper[4090]: I0309 16:24:21.706020 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-pkrt2" event={"ID":"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09","Type":"ContainerStarted","Data":"0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e"} Mar 09 16:24:22.709775 master-0 kubenswrapper[4090]: I0309 16:24:22.709726 4090 generic.go:334] "Generic (PLEG): container finished" podID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerID="5f6392f9e974864cb8a576a8cc4e692a56b1538084351cbc64c608b35b4670f8" exitCode=0 Mar 09 16:24:22.710437 master-0 kubenswrapper[4090]: I0309 16:24:22.709800 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-pkrt2" event={"ID":"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09","Type":"ContainerDied","Data":"5f6392f9e974864cb8a576a8cc4e692a56b1538084351cbc64c608b35b4670f8"} Mar 09 16:24:22.712311 master-0 kubenswrapper[4090]: I0309 16:24:22.712287 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 09 16:24:22.712727 master-0 kubenswrapper[4090]: I0309 16:24:22.712677 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"57f6bbbfcfb537c0879739b1547de923304fd0f8bd8f06701d29220990585d09"} Mar 09 16:24:22.731644 master-0 kubenswrapper[4090]: I0309 16:24:22.731581 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.7315631790000001 podStartE2EDuration="1.731563179s" podCreationTimestamp="2026-03-09 16:24:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:24:22.731495937 +0000 UTC m=+55.906810926" watchObservedRunningTime="2026-03-09 16:24:22.731563179 +0000 UTC m=+55.906878168" Mar 09 16:24:23.736292 master-0 kubenswrapper[4090]: I0309 16:24:23.736238 4090 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:23.793974 master-0 kubenswrapper[4090]: I0309 16:24:23.793911 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl56m\" (UniqueName: \"kubernetes.io/projected/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09-kube-api-access-xl56m\") pod \"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09\" (UID: \"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09\") " Mar 09 16:24:23.798852 master-0 kubenswrapper[4090]: I0309 16:24:23.798808 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09-kube-api-access-xl56m" (OuterVolumeSpecName: "kube-api-access-xl56m") pod "1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" (UID: "1d6b8350-34b6-4a0b-9027-3ea3c7e11d09"). InnerVolumeSpecName "kube-api-access-xl56m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:24:23.894722 master-0 kubenswrapper[4090]: I0309 16:24:23.894655 4090 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl56m\" (UniqueName: \"kubernetes.io/projected/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09-kube-api-access-xl56m\") on node \"master-0\" DevicePath \"\"" Mar 09 16:24:24.719124 master-0 kubenswrapper[4090]: I0309 16:24:24.719045 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-pkrt2" event={"ID":"1d6b8350-34b6-4a0b-9027-3ea3c7e11d09","Type":"ContainerDied","Data":"0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e"} Mar 09 16:24:24.719124 master-0 kubenswrapper[4090]: I0309 16:24:24.719085 4090 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e" Mar 09 16:24:24.719124 master-0 kubenswrapper[4090]: I0309 16:24:24.719107 4090 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-pkrt2" Mar 09 16:24:26.282727 master-0 kubenswrapper[4090]: I0309 16:24:26.282484 4090 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-pkrt2"] Mar 09 16:24:26.289946 master-0 kubenswrapper[4090]: I0309 16:24:26.289875 4090 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-pkrt2"] Mar 09 16:24:27.512080 master-0 kubenswrapper[4090]: I0309 16:24:27.512009 4090 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" path="/var/lib/kubelet/pods/1d6b8350-34b6-4a0b-9027-3ea3c7e11d09/volumes" Mar 09 16:24:29.236516 master-0 kubenswrapper[4090]: I0309 16:24:29.236398 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:29.237358 master-0 kubenswrapper[4090]: E0309 16:24:29.236708 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:29.237358 master-0 kubenswrapper[4090]: E0309 16:24:29.236884 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:45.236849136 +0000 UTC m=+78.412164125 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:31.124511 master-0 kubenswrapper[4090]: I0309 16:24:31.124422 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gfqq8"] Mar 09 16:24:31.125085 master-0 kubenswrapper[4090]: E0309 16:24:31.124574 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerName="prober" Mar 09 16:24:31.125085 master-0 kubenswrapper[4090]: I0309 16:24:31.124595 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerName="prober" Mar 09 16:24:31.125085 master-0 kubenswrapper[4090]: I0309 16:24:31.124628 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerName="prober" Mar 09 16:24:31.125085 master-0 kubenswrapper[4090]: I0309 16:24:31.124894 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gfqq8" Mar 09 16:24:31.126878 master-0 kubenswrapper[4090]: I0309 16:24:31.126820 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 09 16:24:31.127578 master-0 kubenswrapper[4090]: I0309 16:24:31.127549 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 09 16:24:31.128343 master-0 kubenswrapper[4090]: I0309 16:24:31.128310 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 09 16:24:31.130812 master-0 kubenswrapper[4090]: I0309 16:24:31.130777 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 09 16:24:31.252968 master-0 kubenswrapper[4090]: I0309 16:24:31.252819 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:24:31.252968 master-0 kubenswrapper[4090]: I0309 16:24:31.252901 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:24:31.253487 master-0 kubenswrapper[4090]: I0309 16:24:31.253066 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " 
pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.253487 master-0 kubenswrapper[4090]: I0309 16:24:31.253181 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.253487 master-0 kubenswrapper[4090]: I0309 16:24:31.253281 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.253487 master-0 kubenswrapper[4090]: I0309 16:24:31.253322 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.253849 master-0 kubenswrapper[4090]: I0309 16:24:31.253537 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.253849 master-0 kubenswrapper[4090]: I0309 16:24:31.253740 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.254033 master-0 kubenswrapper[4090]: I0309 16:24:31.253911 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.254123 master-0 kubenswrapper[4090]: I0309 16:24:31.254064 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.254279 master-0 kubenswrapper[4090]: I0309 16:24:31.254215 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.254379 master-0 kubenswrapper[4090]: I0309 16:24:31.254357 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.254863 master-0 kubenswrapper[4090]: I0309 16:24:31.254523 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.254863 master-0 kubenswrapper[4090]: I0309 16:24:31.254712 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.255066 master-0 kubenswrapper[4090]: I0309 16:24:31.254899 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.255066 master-0 kubenswrapper[4090]: I0309 16:24:31.254998 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.255232 master-0 kubenswrapper[4090]: I0309 16:24:31.255027 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.333483 master-0 kubenswrapper[4090]: I0309 16:24:31.333403 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jkhls"]
Mar 09 16:24:31.333942 master-0 kubenswrapper[4090]: I0309 16:24:31.333912 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.337120 master-0 kubenswrapper[4090]: I0309 16:24:31.337076 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 09 16:24:31.339376 master-0 kubenswrapper[4090]: I0309 16:24:31.339341 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 09 16:24:31.355952 master-0 kubenswrapper[4090]: I0309 16:24:31.355884 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.355952 master-0 kubenswrapper[4090]: I0309 16:24:31.355938 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356292 master-0 kubenswrapper[4090]: I0309 16:24:31.356121 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356292 master-0 kubenswrapper[4090]: I0309 16:24:31.356115 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356292 master-0 kubenswrapper[4090]: I0309 16:24:31.356170 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356292 master-0 kubenswrapper[4090]: I0309 16:24:31.356195 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356292 master-0 kubenswrapper[4090]: I0309 16:24:31.356172 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356292 master-0 kubenswrapper[4090]: I0309 16:24:31.356247 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356339 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356367 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356394 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356418 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356456 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356465 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356499 master-0 kubenswrapper[4090]: I0309 16:24:31.356478 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356704 master-0 kubenswrapper[4090]: I0309 16:24:31.356582 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.356957 master-0 kubenswrapper[4090]: I0309 16:24:31.356882 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.357038 master-0 kubenswrapper[4090]: I0309 16:24:31.357013 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.357090 master-0 kubenswrapper[4090]: I0309 16:24:31.357068 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.357090 master-0 kubenswrapper[4090]: I0309 16:24:31.357085 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357135 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357156 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357142 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357257 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357338 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357372 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357311 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357475 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357526 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357630 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357646 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357685 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.359323 master-0 kubenswrapper[4090]: I0309 16:24:31.357713 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.374611 master-0 kubenswrapper[4090]: I0309 16:24:31.374423 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.438279 master-0 kubenswrapper[4090]: I0309 16:24:31.438171 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gfqq8"
Mar 09 16:24:31.451335 master-0 kubenswrapper[4090]: W0309 16:24:31.450878 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2ec8b2_02d7_40c4_ac20_32615d689697.slice/crio-de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70 WatchSource:0}: Error finding container de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70: Status 404 returned error can't find the container with id de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70
Mar 09 16:24:31.458286 master-0 kubenswrapper[4090]: I0309 16:24:31.458210 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458286 master-0 kubenswrapper[4090]: I0309 16:24:31.458285 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458605 master-0 kubenswrapper[4090]: I0309 16:24:31.458326 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458605 master-0 kubenswrapper[4090]: I0309 16:24:31.458360 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458605 master-0 kubenswrapper[4090]: I0309 16:24:31.458455 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458605 master-0 kubenswrapper[4090]: I0309 16:24:31.458560 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458605 master-0 kubenswrapper[4090]: I0309 16:24:31.458592 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.458765 master-0 kubenswrapper[4090]: I0309 16:24:31.458615 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559526 master-0 kubenswrapper[4090]: I0309 16:24:31.559454 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559526 master-0 kubenswrapper[4090]: I0309 16:24:31.559499 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559526 master-0 kubenswrapper[4090]: I0309 16:24:31.559516 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559777 master-0 kubenswrapper[4090]: I0309 16:24:31.559705 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559807 master-0 kubenswrapper[4090]: I0309 16:24:31.559796 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559835 master-0 kubenswrapper[4090]: I0309 16:24:31.559818 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559880 master-0 kubenswrapper[4090]: I0309 16:24:31.559856 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559914 master-0 kubenswrapper[4090]: I0309 16:24:31.559886 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.559914 master-0 kubenswrapper[4090]: I0309 16:24:31.559907 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.560087 master-0 kubenswrapper[4090]: I0309 16:24:31.559973 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.560087 master-0 kubenswrapper[4090]: I0309 16:24:31.560030 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.560087 master-0 kubenswrapper[4090]: I0309 16:24:31.560058 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.560757 master-0 kubenswrapper[4090]: I0309 16:24:31.560403 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.560757 master-0 kubenswrapper[4090]: I0309 16:24:31.560703 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.560757 master-0 kubenswrapper[4090]: I0309 16:24:31.560724 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.575691 master-0 kubenswrapper[4090]: I0309 16:24:31.575635 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.649264 master-0 kubenswrapper[4090]: I0309 16:24:31.648584 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:24:31.658112 master-0 kubenswrapper[4090]: W0309 16:24:31.658015 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba020e0_1728_4e56_9618_d0ec3d9126eb.slice/crio-79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409 WatchSource:0}: Error finding container 79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409: Status 404 returned error can't find the container with id 79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409
Mar 09 16:24:31.737126 master-0 kubenswrapper[4090]: I0309 16:24:31.737037 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfqq8" event={"ID":"df2ec8b2-02d7-40c4-ac20-32615d689697","Type":"ContainerStarted","Data":"de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70"}
Mar 09 16:24:31.738048 master-0 kubenswrapper[4090]: I0309 16:24:31.738001 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerStarted","Data":"79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409"}
Mar 09 16:24:32.122093 master-0 kubenswrapper[4090]: I0309 16:24:32.121984 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n7slb"]
Mar 09 16:24:32.123003 master-0 kubenswrapper[4090]: I0309 16:24:32.122547 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.123003 master-0 kubenswrapper[4090]: E0309 16:24:32.122645 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:32.264983 master-0 kubenswrapper[4090]: I0309 16:24:32.264925 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.264983 master-0 kubenswrapper[4090]: I0309 16:24:32.264983 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.365660 master-0 kubenswrapper[4090]: I0309 16:24:32.365609 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.365660 master-0 kubenswrapper[4090]: I0309 16:24:32.365663 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.365921 master-0 kubenswrapper[4090]: E0309 16:24:32.365859 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:24:32.366042 master-0 kubenswrapper[4090]: E0309 16:24:32.366011 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:32.865968708 +0000 UTC m=+66.041283697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:24:32.384430 master-0 kubenswrapper[4090]: I0309 16:24:32.384283 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.870165 master-0 kubenswrapper[4090]: I0309 16:24:32.870093 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:32.870494 master-0 kubenswrapper[4090]: E0309 16:24:32.870432 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:24:32.870604 master-0 kubenswrapper[4090]: E0309 16:24:32.870574 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:33.870538883 +0000 UTC m=+67.045853872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:24:33.508344 master-0 kubenswrapper[4090]: I0309 16:24:33.507865 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:33.508344 master-0 kubenswrapper[4090]: E0309 16:24:33.508079 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:33.879563 master-0 kubenswrapper[4090]: I0309 16:24:33.879401 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:33.879831 master-0 kubenswrapper[4090]: E0309 16:24:33.879566 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:24:33.879831 master-0 kubenswrapper[4090]: E0309 16:24:33.879623 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:35.879609024 +0000 UTC m=+69.054924013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:24:34.749182 master-0 kubenswrapper[4090]: I0309 16:24:34.749101 4090 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="e18e252fd560cea1fe0cd7cc5f8a186dd08bab19f2d2e38f70e4a77bd4ec31c0" exitCode=0
Mar 09 16:24:34.749182 master-0 kubenswrapper[4090]: I0309 16:24:34.749187 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerDied","Data":"e18e252fd560cea1fe0cd7cc5f8a186dd08bab19f2d2e38f70e4a77bd4ec31c0"}
Mar 09 16:24:35.508083 master-0 kubenswrapper[4090]: I0309 16:24:35.508035 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:35.508338 master-0 kubenswrapper[4090]: E0309 16:24:35.508210 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:35.897172 master-0 kubenswrapper[4090]: I0309 16:24:35.896988 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:35.902935 master-0 kubenswrapper[4090]: E0309 16:24:35.897204 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 16:24:35.902935 master-0 kubenswrapper[4090]: E0309 16:24:35.897307 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:39.897281959 +0000 UTC m=+73.072596938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 16:24:37.509401 master-0 kubenswrapper[4090]: I0309 16:24:37.508303 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:37.509401 master-0 kubenswrapper[4090]: E0309 16:24:37.508560 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:39.508193 master-0 kubenswrapper[4090]: I0309 16:24:39.508116 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:39.508880 master-0 kubenswrapper[4090]: E0309 16:24:39.508277 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:39.929075 master-0 kubenswrapper[4090]: I0309 16:24:39.928933 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:39.929254 master-0 kubenswrapper[4090]: E0309 16:24:39.929108 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 16:24:39.929254 master-0 kubenswrapper[4090]: E0309 16:24:39.929169 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:47.929149547 +0000 UTC m=+81.104464536 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 16:24:41.507878 master-0 kubenswrapper[4090]: I0309 16:24:41.507738 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:41.507878 master-0 kubenswrapper[4090]: E0309 16:24:41.507843 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:42.516045 master-0 kubenswrapper[4090]: W0309 16:24:42.515994 4090 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 09 16:24:42.516659 master-0 kubenswrapper[4090]: I0309 16:24:42.516490 4090 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-master-0-master-0"] Mar 09 16:24:43.536912 master-0 kubenswrapper[4090]: I0309 16:24:43.535480 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:43.536912 master-0 kubenswrapper[4090]: E0309 16:24:43.535592 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:43.562344 master-0 kubenswrapper[4090]: I0309 16:24:43.562295 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"] Mar 09 16:24:43.562697 master-0 kubenswrapper[4090]: I0309 16:24:43.562675 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.564993 master-0 kubenswrapper[4090]: I0309 16:24:43.564864 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 09 16:24:43.564993 master-0 kubenswrapper[4090]: I0309 16:24:43.564886 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 09 16:24:43.566938 master-0 kubenswrapper[4090]: I0309 16:24:43.566498 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 09 16:24:43.566938 master-0 kubenswrapper[4090]: I0309 16:24:43.566722 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 09 16:24:43.566938 master-0 kubenswrapper[4090]: I0309 16:24:43.566836 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 09 16:24:43.657387 master-0 kubenswrapper[4090]: I0309 16:24:43.657248 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.657387 master-0 kubenswrapper[4090]: I0309 16:24:43.657320 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" 
Mar 09 16:24:43.657387 master-0 kubenswrapper[4090]: I0309 16:24:43.657349 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.657657 master-0 kubenswrapper[4090]: I0309 16:24:43.657534 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.728080 master-0 kubenswrapper[4090]: I0309 16:24:43.727825 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=1.727813893 podStartE2EDuration="1.727813893s" podCreationTimestamp="2026-03-09 16:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:24:43.591954996 +0000 UTC m=+76.767269995" watchObservedRunningTime="2026-03-09 16:24:43.727813893 +0000 UTC m=+76.903128882" Mar 09 16:24:43.728927 master-0 kubenswrapper[4090]: I0309 16:24:43.728307 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdrtp"] Mar 09 16:24:43.728927 master-0 kubenswrapper[4090]: I0309 16:24:43.728866 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.735528 master-0 kubenswrapper[4090]: I0309 16:24:43.731453 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 09 16:24:43.735528 master-0 kubenswrapper[4090]: I0309 16:24:43.731791 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 09 16:24:43.765801 master-0 kubenswrapper[4090]: I0309 16:24:43.763862 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.765801 master-0 kubenswrapper[4090]: I0309 16:24:43.763955 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.765801 master-0 kubenswrapper[4090]: I0309 16:24:43.764030 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.765801 master-0 kubenswrapper[4090]: I0309 16:24:43.764085 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.765801 master-0 kubenswrapper[4090]: I0309 16:24:43.765482 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.767209 master-0 kubenswrapper[4090]: I0309 16:24:43.767105 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.770275 master-0 kubenswrapper[4090]: I0309 16:24:43.770241 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.786365 master-0 kubenswrapper[4090]: I0309 16:24:43.786248 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.865514 master-0 kubenswrapper[4090]: I0309 16:24:43.865397 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-slash\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865514 master-0 kubenswrapper[4090]: I0309 16:24:43.865513 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-ovn\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865543 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865577 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgsn\" (UniqueName: \"kubernetes.io/projected/2a584527-bd42-4982-91c8-8f4c833dbfb5-kube-api-access-2fgsn\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865606 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865629 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-kubelet\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865656 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-var-lib-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865684 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-systemd\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865708 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-env-overrides\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865730 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovn-node-metrics-cert\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865752 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-script-lib\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.865806 master-0 kubenswrapper[4090]: I0309 16:24:43.865796 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-systemd-units\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865820 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865846 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-config\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865880 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-log-socket\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865901 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-bin\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865927 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-netd\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865953 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-netns\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865975 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-node-log\") pod \"ovnkube-node-gdrtp\" 
(UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.866093 master-0 kubenswrapper[4090]: I0309 16:24:43.865997 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-etc-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.883288 master-0 kubenswrapper[4090]: I0309 16:24:43.883217 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:24:43.967615 master-0 kubenswrapper[4090]: I0309 16:24:43.966981 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-netns\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967615 master-0 kubenswrapper[4090]: I0309 16:24:43.967517 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-etc-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967615 master-0 kubenswrapper[4090]: I0309 16:24:43.967542 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-node-log\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967615 master-0 kubenswrapper[4090]: I0309 16:24:43.967114 4090 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-netns\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967615 master-0 kubenswrapper[4090]: I0309 16:24:43.967562 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-slash\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967726 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-etc-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967816 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-slash\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967875 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-ovn\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967901 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967922 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fgsn\" (UniqueName: \"kubernetes.io/projected/2a584527-bd42-4982-91c8-8f4c833dbfb5-kube-api-access-2fgsn\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967947 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.967970 master-0 kubenswrapper[4090]: I0309 16:24:43.967962 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-kubelet\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.967983 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-var-lib-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968025 4090 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968024 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968066 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-node-log\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968083 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-systemd\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968113 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-env-overrides\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968140 4090 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovn-node-metrics-cert\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.968186 master-0 kubenswrapper[4090]: I0309 16:24:43.968165 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-script-lib\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.968230 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-systemd-units\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.968255 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.968289 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-config\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 
16:24:43.968325 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-log-socket\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.968348 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-bin\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.968462 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-kubelet\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.968965 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-netd\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969106 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-log-socket\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969140 4090 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-systemd-units\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969114 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-netd\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969189 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-ovn-kubernetes\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969231 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-systemd\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969253 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-ovn\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969263 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-bin\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969346 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-var-lib-openvswitch\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969473 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-env-overrides\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.969692 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-config\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.970324 master-0 kubenswrapper[4090]: I0309 16:24:43.970183 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-script-lib\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.972934 master-0 kubenswrapper[4090]: I0309 16:24:43.972901 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovn-node-metrics-cert\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:43.987540 master-0 kubenswrapper[4090]: I0309 16:24:43.987471 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fgsn\" (UniqueName: \"kubernetes.io/projected/2a584527-bd42-4982-91c8-8f4c833dbfb5-kube-api-access-2fgsn\") pod \"ovnkube-node-gdrtp\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:44.044065 master-0 kubenswrapper[4090]: I0309 16:24:44.044000 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:24:44.065724 master-0 kubenswrapper[4090]: W0309 16:24:44.065653 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a584527_bd42_4982_91c8_8f4c833dbfb5.slice/crio-dea16b36ad7acaaf470238d84702fe60241154a57b0bc52f42d559e22bc82e3c WatchSource:0}: Error finding container dea16b36ad7acaaf470238d84702fe60241154a57b0bc52f42d559e22bc82e3c: Status 404 returned error can't find the container with id dea16b36ad7acaaf470238d84702fe60241154a57b0bc52f42d559e22bc82e3c Mar 09 16:24:44.780706 master-0 kubenswrapper[4090]: I0309 16:24:44.778821 4090 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="e85c846d70b2880d50adc9dc310cb9743473eb6e96f2c0617b7d1adfb1817ac6" exitCode=0 Mar 09 16:24:44.780706 master-0 kubenswrapper[4090]: I0309 16:24:44.778878 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerDied","Data":"e85c846d70b2880d50adc9dc310cb9743473eb6e96f2c0617b7d1adfb1817ac6"} Mar 09 16:24:44.784584 
master-0 kubenswrapper[4090]: I0309 16:24:44.784540 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfqq8" event={"ID":"df2ec8b2-02d7-40c4-ac20-32615d689697","Type":"ContainerStarted","Data":"2af9ae9dfcc544df7b8875ba8f9fa90a069d7b59b85e3265587d53f188319aa4"} Mar 09 16:24:44.786449 master-0 kubenswrapper[4090]: I0309 16:24:44.786395 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"dea16b36ad7acaaf470238d84702fe60241154a57b0bc52f42d559e22bc82e3c"} Mar 09 16:24:44.787743 master-0 kubenswrapper[4090]: I0309 16:24:44.787713 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" event={"ID":"e4895f22-8fcd-4ace-96d8-bc2e18a67891","Type":"ContainerStarted","Data":"88eb2d0dc5a238bce37b255f1fb97a8860b07b3a0e7d0cdbeec5cbf7626365d6"} Mar 09 16:24:44.787743 master-0 kubenswrapper[4090]: I0309 16:24:44.787739 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" event={"ID":"e4895f22-8fcd-4ace-96d8-bc2e18a67891","Type":"ContainerStarted","Data":"2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4"} Mar 09 16:24:45.279600 master-0 kubenswrapper[4090]: I0309 16:24:45.279536 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:24:45.279799 master-0 kubenswrapper[4090]: E0309 16:24:45.279705 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not 
found Mar 09 16:24:45.279799 master-0 kubenswrapper[4090]: E0309 16:24:45.279776 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:17.279752081 +0000 UTC m=+110.455067070 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:24:45.508190 master-0 kubenswrapper[4090]: I0309 16:24:45.507743 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:45.508190 master-0 kubenswrapper[4090]: E0309 16:24:45.507875 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:47.511623 master-0 kubenswrapper[4090]: I0309 16:24:47.511558 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:47.512174 master-0 kubenswrapper[4090]: E0309 16:24:47.511678 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:47.798179 master-0 kubenswrapper[4090]: I0309 16:24:47.798016 4090 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="1c8c260da059200c19ff4508a0a4e27c1306ddf0f97c62b30fb7ed75be818372" exitCode=0 Mar 09 16:24:47.798179 master-0 kubenswrapper[4090]: I0309 16:24:47.798090 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerDied","Data":"1c8c260da059200c19ff4508a0a4e27c1306ddf0f97c62b30fb7ed75be818372"} Mar 09 16:24:48.171531 master-0 kubenswrapper[4090]: I0309 16:24:48.171324 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:48.171794 master-0 kubenswrapper[4090]: E0309 16:24:48.171543 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 16:24:48.171794 master-0 kubenswrapper[4090]: E0309 16:24:48.171681 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:04.171649113 +0000 UTC m=+97.346964142 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 09 16:24:48.817366 master-0 kubenswrapper[4090]: I0309 16:24:48.817181 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gfqq8" podStartSLOduration=5.349496592 podStartE2EDuration="17.817157917s" podCreationTimestamp="2026-03-09 16:24:31 +0000 UTC" firstStartedPulling="2026-03-09 16:24:31.453257694 +0000 UTC m=+64.628572683" lastFinishedPulling="2026-03-09 16:24:43.920919019 +0000 UTC m=+77.096234008" observedRunningTime="2026-03-09 16:24:44.811269284 +0000 UTC m=+77.986584273" watchObservedRunningTime="2026-03-09 16:24:48.817157917 +0000 UTC m=+81.992472916" Mar 09 16:24:49.508038 master-0 kubenswrapper[4090]: I0309 16:24:49.507989 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:49.508255 master-0 kubenswrapper[4090]: E0309 16:24:49.508193 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:24:49.875239 master-0 kubenswrapper[4090]: I0309 16:24:49.875108 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-ncskk"] Mar 09 16:24:49.875779 master-0 kubenswrapper[4090]: I0309 16:24:49.875470 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:24:49.875779 master-0 kubenswrapper[4090]: E0309 16:24:49.875541 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:24:49.891157 master-0 kubenswrapper[4090]: I0309 16:24:49.891057 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:24:49.992537 master-0 kubenswrapper[4090]: I0309 16:24:49.992102 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:24:50.272776 master-0 kubenswrapper[4090]: E0309 16:24:50.272705 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 16:24:50.272776 master-0 kubenswrapper[4090]: E0309 16:24:50.272751 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 16:24:50.272776 master-0 
kubenswrapper[4090]: E0309 16:24:50.272766 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 16:24:50.273180 master-0 kubenswrapper[4090]: E0309 16:24:50.272848 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:24:50.772827361 +0000 UTC m=+83.948142350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 16:24:50.799847 master-0 kubenswrapper[4090]: I0309 16:24:50.799548 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:24:50.799847 master-0 kubenswrapper[4090]: E0309 16:24:50.799790 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 09 16:24:50.799847 master-0 kubenswrapper[4090]: E0309 16:24:50.799843 4090 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 09 16:24:50.799847 master-0 kubenswrapper[4090]: E0309 16:24:50.799864 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 16:24:50.800251 master-0 kubenswrapper[4090]: E0309 16:24:50.799951 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:24:51.799926686 +0000 UTC m=+84.975241705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 09 16:24:51.375832 master-0 kubenswrapper[4090]: I0309 16:24:51.375724 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-nqwd2"] Mar 09 16:24:51.376455 master-0 kubenswrapper[4090]: I0309 16:24:51.376113 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.377980 master-0 kubenswrapper[4090]: I0309 16:24:51.377528 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 09 16:24:51.377980 master-0 kubenswrapper[4090]: I0309 16:24:51.377928 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 09 16:24:51.378524 master-0 kubenswrapper[4090]: I0309 16:24:51.378227 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 09 16:24:51.378677 master-0 kubenswrapper[4090]: I0309 16:24:51.378590 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 09 16:24:51.378677 master-0 kubenswrapper[4090]: I0309 16:24:51.378611 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 09 16:24:51.405093 master-0 kubenswrapper[4090]: I0309 16:24:51.404403 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.405093 master-0 kubenswrapper[4090]: I0309 16:24:51.404484 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.405093 master-0 
kubenswrapper[4090]: I0309 16:24:51.404526 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.405093 master-0 kubenswrapper[4090]: I0309 16:24:51.404552 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvh62\" (UniqueName: \"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.505133 master-0 kubenswrapper[4090]: I0309 16:24:51.505002 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.505133 master-0 kubenswrapper[4090]: I0309 16:24:51.505055 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvh62\" (UniqueName: \"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.505133 master-0 kubenswrapper[4090]: I0309 16:24:51.505089 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") 
pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.505312 master-0 kubenswrapper[4090]: I0309 16:24:51.505281 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:24:51.505587 master-0 kubenswrapper[4090]: E0309 16:24:51.505545 4090 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 09 16:24:51.505797 master-0 kubenswrapper[4090]: E0309 16:24:51.505620 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert podName:60e07bf5-933c-4ff6-9a1a-2fd05392c8e9 nodeName:}" failed. No retries permitted until 2026-03-09 16:24:52.005601192 +0000 UTC m=+85.180916231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert") pod "network-node-identity-nqwd2" (UID: "60e07bf5-933c-4ff6-9a1a-2fd05392c8e9") : secret "network-node-identity-cert" not found Mar 09 16:24:51.508788 master-0 kubenswrapper[4090]: I0309 16:24:51.508564 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:24:51.508788 master-0 kubenswrapper[4090]: E0309 16:24:51.508670 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:24:51.508788 master-0 kubenswrapper[4090]: I0309 16:24:51.508572 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:24:51.508788 master-0 kubenswrapper[4090]: E0309 16:24:51.508741 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:51.511528 master-0 kubenswrapper[4090]: I0309 16:24:51.511091 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:24:51.511717 master-0 kubenswrapper[4090]: I0309 16:24:51.511698 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:24:51.528060 master-0 kubenswrapper[4090]: I0309 16:24:51.527566 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvh62\" (UniqueName: \"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:24:51.806975 master-0 kubenswrapper[4090]: I0309 16:24:51.806825 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:51.807163 master-0 kubenswrapper[4090]: E0309 16:24:51.807002 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 16:24:51.807163 master-0 kubenswrapper[4090]: E0309 16:24:51.807018 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 16:24:51.807163 master-0 kubenswrapper[4090]: E0309 16:24:51.807029 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:24:51.807163 master-0 kubenswrapper[4090]: E0309 16:24:51.807084 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:24:53.807067094 +0000 UTC m=+86.982382083 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:24:51.812950 master-0 kubenswrapper[4090]: I0309 16:24:51.812905 4090 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="0e4dffbedd2651da68c4f09131df95460c21cf12adecaf4ed6c71f35a722b888" exitCode=0
Mar 09 16:24:51.813035 master-0 kubenswrapper[4090]: I0309 16:24:51.812961 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerDied","Data":"0e4dffbedd2651da68c4f09131df95460c21cf12adecaf4ed6c71f35a722b888"}
Mar 09 16:24:52.010069 master-0 kubenswrapper[4090]: I0309 16:24:52.009538 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:24:52.012888 master-0 kubenswrapper[4090]: I0309 16:24:52.012835 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:24:52.290926 master-0 kubenswrapper[4090]: I0309 16:24:52.290495 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:24:52.301224 master-0 kubenswrapper[4090]: W0309 16:24:52.301186 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60e07bf5_933c_4ff6_9a1a_2fd05392c8e9.slice/crio-2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7 WatchSource:0}: Error finding container 2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7: Status 404 returned error can't find the container with id 2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7
Mar 09 16:24:52.819557 master-0 kubenswrapper[4090]: I0309 16:24:52.819318 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerStarted","Data":"2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7"}
Mar 09 16:24:53.508163 master-0 kubenswrapper[4090]: I0309 16:24:53.508102 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:53.508364 master-0 kubenswrapper[4090]: E0309 16:24:53.508287 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:53.508601 master-0 kubenswrapper[4090]: I0309 16:24:53.508121 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:53.508601 master-0 kubenswrapper[4090]: E0309 16:24:53.508560 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:24:53.825561 master-0 kubenswrapper[4090]: I0309 16:24:53.825355 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:53.826072 master-0 kubenswrapper[4090]: E0309 16:24:53.825630 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 16:24:53.826072 master-0 kubenswrapper[4090]: E0309 16:24:53.825656 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 16:24:53.826072 master-0 kubenswrapper[4090]: E0309 16:24:53.825669 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:24:53.826072 master-0 kubenswrapper[4090]: E0309 16:24:53.825749 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:24:57.825729978 +0000 UTC m=+91.001044967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:24:55.508774 master-0 kubenswrapper[4090]: I0309 16:24:55.508442 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:55.511829 master-0 kubenswrapper[4090]: I0309 16:24:55.508441 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:55.511829 master-0 kubenswrapper[4090]: E0309 16:24:55.508996 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:24:55.511829 master-0 kubenswrapper[4090]: E0309 16:24:55.508860 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:57.508415 master-0 kubenswrapper[4090]: I0309 16:24:57.508370 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:57.508994 master-0 kubenswrapper[4090]: E0309 16:24:57.508958 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:57.509330 master-0 kubenswrapper[4090]: I0309 16:24:57.509298 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:57.509385 master-0 kubenswrapper[4090]: E0309 16:24:57.509363 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:24:57.869273 master-0 kubenswrapper[4090]: I0309 16:24:57.869147 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:57.869494 master-0 kubenswrapper[4090]: E0309 16:24:57.869299 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 16:24:57.869494 master-0 kubenswrapper[4090]: E0309 16:24:57.869317 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 16:24:57.869494 master-0 kubenswrapper[4090]: E0309 16:24:57.869329 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:24:57.869494 master-0 kubenswrapper[4090]: E0309 16:24:57.869381 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:05.86936266 +0000 UTC m=+99.044677649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:24:59.508558 master-0 kubenswrapper[4090]: I0309 16:24:59.508406 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:24:59.508558 master-0 kubenswrapper[4090]: I0309 16:24:59.508451 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:24:59.509182 master-0 kubenswrapper[4090]: E0309 16:24:59.508582 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:24:59.509182 master-0 kubenswrapper[4090]: E0309 16:24:59.508667 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:01.508080 master-0 kubenswrapper[4090]: I0309 16:25:01.508019 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:01.508686 master-0 kubenswrapper[4090]: I0309 16:25:01.508066 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:01.508686 master-0 kubenswrapper[4090]: E0309 16:25:01.508159 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:01.508686 master-0 kubenswrapper[4090]: E0309 16:25:01.508272 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:02.164155 master-0 kubenswrapper[4090]: I0309 16:25:02.164094 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 09 16:25:03.508316 master-0 kubenswrapper[4090]: I0309 16:25:03.508194 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:03.508888 master-0 kubenswrapper[4090]: I0309 16:25:03.508188 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:03.508888 master-0 kubenswrapper[4090]: E0309 16:25:03.508346 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:03.508888 master-0 kubenswrapper[4090]: E0309 16:25:03.508597 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:04.223749 master-0 kubenswrapper[4090]: I0309 16:25:04.223668 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:04.223990 master-0 kubenswrapper[4090]: E0309 16:25:04.223829 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:25:04.223990 master-0 kubenswrapper[4090]: E0309 16:25:04.223896 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.223879729 +0000 UTC m=+129.399194718 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 09 16:25:04.848628 master-0 kubenswrapper[4090]: I0309 16:25:04.848577 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" exitCode=0
Mar 09 16:25:04.849478 master-0 kubenswrapper[4090]: I0309 16:25:04.848661 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"}
Mar 09 16:25:04.851292 master-0 kubenswrapper[4090]: I0309 16:25:04.851250 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" event={"ID":"e4895f22-8fcd-4ace-96d8-bc2e18a67891","Type":"ContainerStarted","Data":"127fddf033d016698d708311f1ce4a751f3a2f860d40130a5519cb0b6938e0a1"}
Mar 09 16:25:04.853444 master-0 kubenswrapper[4090]: I0309 16:25:04.853351 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerStarted","Data":"c33568491251a6cc29f433d394d9f99ae4624c6f4d925ee43ed4349c74f3003e"}
Mar 09 16:25:04.853444 master-0 kubenswrapper[4090]: I0309 16:25:04.853402 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerStarted","Data":"19e7a93debcd60ad8d43d355b5089beeffc7d6f90b26eba24b2b21c548818ffa"}
Mar 09 16:25:04.856654 master-0 kubenswrapper[4090]: I0309 16:25:04.856605 4090 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="db91761f4ed69865df84925e7d692b45a5d00ca5d8cda47d3e02e2821fc11818" exitCode=0
Mar 09 16:25:04.856654 master-0 kubenswrapper[4090]: I0309 16:25:04.856652 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerDied","Data":"db91761f4ed69865df84925e7d692b45a5d00ca5d8cda47d3e02e2821fc11818"}
Mar 09 16:25:04.865897 master-0 kubenswrapper[4090]: I0309 16:25:04.865234 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=3.865199841 podStartE2EDuration="3.865199841s" podCreationTimestamp="2026-03-09 16:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:25:04.863585053 +0000 UTC m=+98.038900042" watchObservedRunningTime="2026-03-09 16:25:04.865199841 +0000 UTC m=+98.040514850"
Mar 09 16:25:04.943640 master-0 kubenswrapper[4090]: I0309 16:25:04.940989 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" podStartSLOduration=2.315581436 podStartE2EDuration="21.940961528s" podCreationTimestamp="2026-03-09 16:24:43 +0000 UTC" firstStartedPulling="2026-03-09 16:24:44.113990513 +0000 UTC m=+77.289305512" lastFinishedPulling="2026-03-09 16:25:03.739370615 +0000 UTC m=+96.914685604" observedRunningTime="2026-03-09 16:25:04.920512801 +0000 UTC m=+98.095827790" watchObservedRunningTime="2026-03-09 16:25:04.940961528 +0000 UTC m=+98.116276517"
Mar 09 16:25:04.943640 master-0 kubenswrapper[4090]: I0309 16:25:04.941542 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-nqwd2" podStartSLOduration=2.153024153 podStartE2EDuration="13.941532334s" podCreationTimestamp="2026-03-09 16:24:51 +0000 UTC" firstStartedPulling="2026-03-09 16:24:52.304509425 +0000 UTC m=+85.479824414" lastFinishedPulling="2026-03-09 16:25:04.093017606 +0000 UTC m=+97.268332595" observedRunningTime="2026-03-09 16:25:04.938382703 +0000 UTC m=+98.113697702" watchObservedRunningTime="2026-03-09 16:25:04.941532334 +0000 UTC m=+98.116847323"
Mar 09 16:25:05.508693 master-0 kubenswrapper[4090]: I0309 16:25:05.508383 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:05.508931 master-0 kubenswrapper[4090]: I0309 16:25:05.508403 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:05.508931 master-0 kubenswrapper[4090]: E0309 16:25:05.508759 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:05.508931 master-0 kubenswrapper[4090]: E0309 16:25:05.508843 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:05.865070 master-0 kubenswrapper[4090]: I0309 16:25:05.864982 4090 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="f92b3bf64fc4165da416ac63f159e2be71d6527248ee0c28520165449adf5e4e" exitCode=0
Mar 09 16:25:05.865070 master-0 kubenswrapper[4090]: I0309 16:25:05.865075 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerDied","Data":"f92b3bf64fc4165da416ac63f159e2be71d6527248ee0c28520165449adf5e4e"}
Mar 09 16:25:05.873952 master-0 kubenswrapper[4090]: I0309 16:25:05.873853 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"}
Mar 09 16:25:05.873952 master-0 kubenswrapper[4090]: I0309 16:25:05.873908 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"}
Mar 09 16:25:05.873952 master-0 kubenswrapper[4090]: I0309 16:25:05.873921 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"}
Mar 09 16:25:05.873952 master-0 kubenswrapper[4090]: I0309 16:25:05.873942 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"}
Mar 09 16:25:05.873952 master-0 kubenswrapper[4090]: I0309 16:25:05.873954 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"}
Mar 09 16:25:05.873952 master-0 kubenswrapper[4090]: I0309 16:25:05.873968 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"}
Mar 09 16:25:05.940730 master-0 kubenswrapper[4090]: I0309 16:25:05.940665 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:05.941288 master-0 kubenswrapper[4090]: E0309 16:25:05.941260 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 16:25:05.941288 master-0 kubenswrapper[4090]: E0309 16:25:05.941286 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 16:25:05.941386 master-0 kubenswrapper[4090]: E0309 16:25:05.941299 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:25:05.941386 master-0 kubenswrapper[4090]: E0309 16:25:05.941339 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:21.941324809 +0000 UTC m=+115.116639798 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:25:06.519254 master-0 kubenswrapper[4090]: I0309 16:25:06.519165 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 09 16:25:06.879906 master-0 kubenswrapper[4090]: I0309 16:25:06.879732 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jkhls" event={"ID":"1ba020e0-1728-4e56-9618-d0ec3d9126eb","Type":"ContainerStarted","Data":"e84e1588f228f113c9d1e9b97ac73f3346e599805dec6f9d913bffd7c1e8fe3a"}
Mar 09 16:25:06.916227 master-0 kubenswrapper[4090]: I0309 16:25:06.916128 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jkhls" podStartSLOduration=3.797403094 podStartE2EDuration="35.916103874s" podCreationTimestamp="2026-03-09 16:24:31 +0000 UTC" firstStartedPulling="2026-03-09 16:24:31.661073071 +0000 UTC m=+64.836388060" lastFinishedPulling="2026-03-09 16:25:03.779773851 +0000 UTC m=+96.955088840" observedRunningTime="2026-03-09 16:25:06.898723747 +0000 UTC m=+100.074038736" watchObservedRunningTime="2026-03-09 16:25:06.916103874 +0000 UTC m=+100.091418863"
Mar 09 16:25:07.507395 master-0 kubenswrapper[4090]: I0309 16:25:07.507355 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:07.507491 master-0 kubenswrapper[4090]: I0309 16:25:07.507351 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:07.508758 master-0 kubenswrapper[4090]: E0309 16:25:07.508706 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:07.508966 master-0 kubenswrapper[4090]: E0309 16:25:07.508923 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:08.893774 master-0 kubenswrapper[4090]: I0309 16:25:08.893380 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"}
Mar 09 16:25:09.508559 master-0 kubenswrapper[4090]: I0309 16:25:09.508491 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:09.508559 master-0 kubenswrapper[4090]: I0309 16:25:09.508507 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:09.508827 master-0 kubenswrapper[4090]: E0309 16:25:09.508669 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:09.508827 master-0 kubenswrapper[4090]: E0309 16:25:09.508750 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:10.905502 master-0 kubenswrapper[4090]: I0309 16:25:10.903631 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerStarted","Data":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"}
Mar 09 16:25:10.905502 master-0 kubenswrapper[4090]: I0309 16:25:10.904030 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp"
Mar 09 16:25:10.905502 master-0 kubenswrapper[4090]: I0309 16:25:10.904148 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp"
Mar 09 16:25:10.905502 master-0 kubenswrapper[4090]: I0309 16:25:10.904205 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp"
Mar 09 16:25:10.925541 master-0 kubenswrapper[4090]: I0309 16:25:10.925376 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp"
Mar 09 16:25:10.925825 master-0 kubenswrapper[4090]: I0309 16:25:10.925791 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp"
Mar 09 16:25:10.963312 master-0 kubenswrapper[4090]: I0309 16:25:10.963220 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=4.963202326 podStartE2EDuration="4.963202326s" podCreationTimestamp="2026-03-09 16:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:25:06.91666859 +0000 UTC m=+100.091983579" watchObservedRunningTime="2026-03-09 16:25:10.963202326 +0000 UTC m=+104.138517315"
Mar 09 16:25:10.983619 master-0 kubenswrapper[4090]: I0309 16:25:10.983528 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podStartSLOduration=8.048337283 podStartE2EDuration="27.983503388s" podCreationTimestamp="2026-03-09 16:24:43 +0000 UTC" firstStartedPulling="2026-03-09 16:24:44.068319063 +0000 UTC m=+77.243634042" lastFinishedPulling="2026-03-09 16:25:04.003485158 +0000 UTC m=+97.178800147" observedRunningTime="2026-03-09 16:25:10.962345871 +0000 UTC m=+104.137660850" watchObservedRunningTime="2026-03-09 16:25:10.983503388 +0000 UTC m=+104.158818377"
Mar 09 16:25:11.508241 master-0 kubenswrapper[4090]: I0309 16:25:11.508107 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:11.508640 master-0 kubenswrapper[4090]: I0309 16:25:11.508252 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:11.508640 master-0 kubenswrapper[4090]: E0309 16:25:11.508355 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:11.508640 master-0 kubenswrapper[4090]: E0309 16:25:11.508514 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:11.569737 master-0 kubenswrapper[4090]: I0309 16:25:11.569684 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 09 16:25:11.576942 master-0 kubenswrapper[4090]: I0309 16:25:11.576891 4090 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdrtp"]
Mar 09 16:25:12.913608 master-0 kubenswrapper[4090]: I0309 16:25:12.913464 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-controller" containerID="cri-o://275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" gracePeriod=30
Mar 09 16:25:12.913608 master-0 kubenswrapper[4090]: I0309 16:25:12.913472 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="nbdb" containerID="cri-o://37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" gracePeriod=30
Mar 09 16:25:12.913608 master-0 kubenswrapper[4090]: I0309 16:25:12.913588 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-acl-logging" containerID="cri-o://e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" gracePeriod=30
Mar 09 16:25:12.913608 master-0 kubenswrapper[4090]: I0309 16:25:12.913575 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-node" containerID="cri-o://fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" gracePeriod=30
Mar 09
16:25:12.915269 master-0 kubenswrapper[4090]: I0309 16:25:12.913549 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" gracePeriod=30 Mar 09 16:25:12.915269 master-0 kubenswrapper[4090]: I0309 16:25:12.913555 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="northd" containerID="cri-o://3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" gracePeriod=30 Mar 09 16:25:12.915269 master-0 kubenswrapper[4090]: I0309 16:25:12.913638 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="sbdb" containerID="cri-o://f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" gracePeriod=30 Mar 09 16:25:12.932087 master-0 kubenswrapper[4090]: I0309 16:25:12.932028 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovnkube-controller" containerID="cri-o://2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" gracePeriod=30 Mar 09 16:25:12.988768 master-0 kubenswrapper[4090]: I0309 16:25:12.988585 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-ncskk"] Mar 09 16:25:12.988768 master-0 kubenswrapper[4090]: I0309 16:25:12.988696 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:25:12.988768 master-0 kubenswrapper[4090]: E0309 16:25:12.988759 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:25:12.989963 master-0 kubenswrapper[4090]: I0309 16:25:12.989860 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n7slb"] Mar 09 16:25:12.990239 master-0 kubenswrapper[4090]: I0309 16:25:12.990020 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:12.990239 master-0 kubenswrapper[4090]: E0309 16:25:12.990157 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:25:13.894535 master-0 kubenswrapper[4090]: I0309 16:25:13.894513 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/ovnkube-controller/0.log" Mar 09 16:25:13.896869 master-0 kubenswrapper[4090]: I0309 16:25:13.896835 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/kube-rbac-proxy-ovn-metrics/0.log" Mar 09 16:25:13.897539 master-0 kubenswrapper[4090]: I0309 16:25:13.897510 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/kube-rbac-proxy-node/0.log" Mar 09 16:25:13.898035 master-0 kubenswrapper[4090]: I0309 16:25:13.898022 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/ovn-acl-logging/0.log" Mar 09 16:25:13.898649 master-0 kubenswrapper[4090]: I0309 16:25:13.898627 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/ovn-controller/0.log" Mar 09 16:25:13.899471 master-0 kubenswrapper[4090]: I0309 16:25:13.899456 4090 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:25:13.918130 master-0 kubenswrapper[4090]: I0309 16:25:13.918073 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/ovnkube-controller/0.log" Mar 09 16:25:13.920148 master-0 kubenswrapper[4090]: I0309 16:25:13.920079 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/kube-rbac-proxy-ovn-metrics/0.log" Mar 09 16:25:13.920759 master-0 kubenswrapper[4090]: I0309 16:25:13.920712 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/kube-rbac-proxy-node/0.log" Mar 09 16:25:13.921229 master-0 kubenswrapper[4090]: I0309 16:25:13.921192 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/ovn-acl-logging/0.log" Mar 09 16:25:13.921843 master-0 kubenswrapper[4090]: I0309 16:25:13.921822 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gdrtp_2a584527-bd42-4982-91c8-8f4c833dbfb5/ovn-controller/0.log" Mar 09 16:25:13.922454 master-0 kubenswrapper[4090]: I0309 16:25:13.922406 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" exitCode=2 Mar 09 16:25:13.922454 master-0 kubenswrapper[4090]: I0309 16:25:13.922453 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" exitCode=0 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922466 4090 generic.go:334] "Generic (PLEG): container finished" 
podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" exitCode=0 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922474 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" exitCode=0 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922482 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" exitCode=143 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922490 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" exitCode=143 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922497 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" exitCode=143 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922505 4090 generic.go:334] "Generic (PLEG): container finished" podID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" exitCode=143 Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922529 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922546 4090 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922561 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922577 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922590 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922602 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} Mar 09 16:25:13.922642 master-0 kubenswrapper[4090]: I0309 16:25:13.922614 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922626 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} Mar 09 
16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922758 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922767 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922769 4090 scope.go:117] "RemoveContainer" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922777 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922883 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922893 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922902 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922909 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922916 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922922 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922929 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922936 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922942 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922953 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922963 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} Mar 
09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922968 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922973 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922979 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922985 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922991 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.922997 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.923003 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.923009 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.923019 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gdrtp" event={"ID":"2a584527-bd42-4982-91c8-8f4c833dbfb5","Type":"ContainerDied","Data":"dea16b36ad7acaaf470238d84702fe60241154a57b0bc52f42d559e22bc82e3c"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.923028 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.923036 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} Mar 09 16:25:13.923507 master-0 kubenswrapper[4090]: I0309 16:25:13.923044 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} Mar 09 16:25:13.925450 master-0 kubenswrapper[4090]: I0309 16:25:13.923050 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} Mar 09 16:25:13.925450 master-0 kubenswrapper[4090]: I0309 16:25:13.923057 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} Mar 09 16:25:13.925450 master-0 kubenswrapper[4090]: I0309 16:25:13.923062 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} Mar 
09 16:25:13.925450 master-0 kubenswrapper[4090]: I0309 16:25:13.923067 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} Mar 09 16:25:13.925450 master-0 kubenswrapper[4090]: I0309 16:25:13.923072 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} Mar 09 16:25:13.925450 master-0 kubenswrapper[4090]: I0309 16:25:13.923077 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} Mar 09 16:25:13.943805 master-0 kubenswrapper[4090]: I0309 16:25:13.943759 4090 scope.go:117] "RemoveContainer" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:13.952577 master-0 kubenswrapper[4090]: I0309 16:25:13.952530 4090 scope.go:117] "RemoveContainer" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:13.961761 master-0 kubenswrapper[4090]: I0309 16:25:13.961658 4090 scope.go:117] "RemoveContainer" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:13.967181 master-0 kubenswrapper[4090]: I0309 16:25:13.967097 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=2.9670763300000003 podStartE2EDuration="2.96707633s" podCreationTimestamp="2026-03-09 16:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:25:13.010893155 +0000 UTC m=+106.186208144" watchObservedRunningTime="2026-03-09 16:25:13.96707633 +0000 UTC m=+107.142391329" Mar 09 16:25:13.981274 master-0 
kubenswrapper[4090]: I0309 16:25:13.975908 4090 scope.go:117] "RemoveContainer" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:13.983850 master-0 kubenswrapper[4090]: I0309 16:25:13.983810 4090 scope.go:117] "RemoveContainer" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:13.993131 master-0 kubenswrapper[4090]: I0309 16:25:13.993086 4090 scope.go:117] "RemoveContainer" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" Mar 09 16:25:14.005208 master-0 kubenswrapper[4090]: I0309 16:25:14.005149 4090 scope.go:117] "RemoveContainer" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" Mar 09 16:25:14.016873 master-0 kubenswrapper[4090]: I0309 16:25:14.016820 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-var-lib-openvswitch\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.016873 master-0 kubenswrapper[4090]: I0309 16:25:14.016864 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-config\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.016873 master-0 kubenswrapper[4090]: I0309 16:25:14.016889 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fgsn\" (UniqueName: \"kubernetes.io/projected/2a584527-bd42-4982-91c8-8f4c833dbfb5-kube-api-access-2fgsn\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016903 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-netd\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016915 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-bin\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016917 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016934 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-ovn\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016950 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-env-overrides\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016964 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-netns\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016978 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-node-log\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016994 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-systemd-units\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017017 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017038 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-kubelet\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017061 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovn-node-metrics-cert\") pod 
\"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016961 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.016983 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017001 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017059 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-node-log" (OuterVolumeSpecName: "node-log") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017085 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-openvswitch\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.018092 master-0 kubenswrapper[4090]: I0309 16:25:14.017081 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017130 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017111 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017130 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-ovn-kubernetes\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017153 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017163 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017168 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-log-socket\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017190 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-etc-openvswitch\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017213 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-systemd\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017188 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017210 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-log-socket" (OuterVolumeSpecName: "log-socket") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017229 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017235 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-slash\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017257 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-slash" (OuterVolumeSpecName: "host-slash") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017306 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-script-lib\") pod \"2a584527-bd42-4982-91c8-8f4c833dbfb5\" (UID: \"2a584527-bd42-4982-91c8-8f4c833dbfb5\") " Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017350 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:25:14.019209 master-0 kubenswrapper[4090]: I0309 16:25:14.017440 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017460 4090 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017473 4090 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017481 4090 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017489 4090 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017498 4090 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017506 4090 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017513 4090 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-node-log\") on node 
\"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017522 4090 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017531 4090 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017539 4090 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017547 4090 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017556 4090 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017570 4090 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017579 4090 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017588 4090 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017857 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:25:14.019753 master-0 kubenswrapper[4090]: I0309 16:25:14.017893 4090 scope.go:117] "RemoveContainer" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" Mar 09 16:25:14.020836 master-0 kubenswrapper[4090]: I0309 16:25:14.020791 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:25:14.020908 master-0 kubenswrapper[4090]: I0309 16:25:14.020791 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a584527-bd42-4982-91c8-8f4c833dbfb5-kube-api-access-2fgsn" (OuterVolumeSpecName: "kube-api-access-2fgsn") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "kube-api-access-2fgsn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:25:14.024117 master-0 kubenswrapper[4090]: I0309 16:25:14.024016 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2a584527-bd42-4982-91c8-8f4c833dbfb5" (UID: "2a584527-bd42-4982-91c8-8f4c833dbfb5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:25:14.032938 master-0 kubenswrapper[4090]: I0309 16:25:14.032890 4090 scope.go:117] "RemoveContainer" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:14.033505 master-0 kubenswrapper[4090]: E0309 16:25:14.033458 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": container with ID starting with 2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231 not found: ID does not exist" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:14.033575 master-0 kubenswrapper[4090]: I0309 16:25:14.033512 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} err="failed to get container status \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": rpc error: code = NotFound desc = could not find container \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": container with ID starting with 2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231 not found: ID does not exist" Mar 09 16:25:14.033575 master-0 kubenswrapper[4090]: I0309 16:25:14.033550 4090 scope.go:117] "RemoveContainer" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:14.034000 master-0 
kubenswrapper[4090]: E0309 16:25:14.033946 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": container with ID starting with f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705 not found: ID does not exist" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:14.034053 master-0 kubenswrapper[4090]: I0309 16:25:14.034003 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} err="failed to get container status \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": rpc error: code = NotFound desc = could not find container \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": container with ID starting with f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705 not found: ID does not exist" Mar 09 16:25:14.034053 master-0 kubenswrapper[4090]: I0309 16:25:14.034043 4090 scope.go:117] "RemoveContainer" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:14.034455 master-0 kubenswrapper[4090]: E0309 16:25:14.034402 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": container with ID starting with 37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4 not found: ID does not exist" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:14.034511 master-0 kubenswrapper[4090]: I0309 16:25:14.034448 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} err="failed to get container status 
\"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": rpc error: code = NotFound desc = could not find container \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": container with ID starting with 37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4 not found: ID does not exist" Mar 09 16:25:14.034511 master-0 kubenswrapper[4090]: I0309 16:25:14.034468 4090 scope.go:117] "RemoveContainer" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:14.034866 master-0 kubenswrapper[4090]: E0309 16:25:14.034831 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": container with ID starting with 3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb not found: ID does not exist" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:14.034866 master-0 kubenswrapper[4090]: I0309 16:25:14.034857 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} err="failed to get container status \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": rpc error: code = NotFound desc = could not find container \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": container with ID starting with 3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb not found: ID does not exist" Mar 09 16:25:14.034947 master-0 kubenswrapper[4090]: I0309 16:25:14.034870 4090 scope.go:117] "RemoveContainer" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:14.035355 master-0 kubenswrapper[4090]: E0309 16:25:14.035312 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": container with ID starting with 85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522 not found: ID does not exist" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:14.035406 master-0 kubenswrapper[4090]: I0309 16:25:14.035355 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} err="failed to get container status \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": rpc error: code = NotFound desc = could not find container \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": container with ID starting with 85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522 not found: ID does not exist" Mar 09 16:25:14.035406 master-0 kubenswrapper[4090]: I0309 16:25:14.035382 4090 scope.go:117] "RemoveContainer" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:14.035699 master-0 kubenswrapper[4090]: E0309 16:25:14.035669 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": container with ID starting with fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d not found: ID does not exist" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:14.035699 master-0 kubenswrapper[4090]: I0309 16:25:14.035689 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} err="failed to get container status \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": rpc error: code = NotFound desc = could not find container 
\"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": container with ID starting with fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d not found: ID does not exist" Mar 09 16:25:14.035783 master-0 kubenswrapper[4090]: I0309 16:25:14.035703 4090 scope.go:117] "RemoveContainer" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" Mar 09 16:25:14.036036 master-0 kubenswrapper[4090]: E0309 16:25:14.035992 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": container with ID starting with e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab not found: ID does not exist" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" Mar 09 16:25:14.036083 master-0 kubenswrapper[4090]: I0309 16:25:14.036029 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} err="failed to get container status \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": rpc error: code = NotFound desc = could not find container \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": container with ID starting with e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab not found: ID does not exist" Mar 09 16:25:14.036083 master-0 kubenswrapper[4090]: I0309 16:25:14.036050 4090 scope.go:117] "RemoveContainer" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" Mar 09 16:25:14.036467 master-0 kubenswrapper[4090]: E0309 16:25:14.036413 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": container with ID starting with 
275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a not found: ID does not exist" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" Mar 09 16:25:14.036467 master-0 kubenswrapper[4090]: I0309 16:25:14.036455 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} err="failed to get container status \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": rpc error: code = NotFound desc = could not find container \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": container with ID starting with 275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a not found: ID does not exist" Mar 09 16:25:14.036558 master-0 kubenswrapper[4090]: I0309 16:25:14.036471 4090 scope.go:117] "RemoveContainer" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" Mar 09 16:25:14.036718 master-0 kubenswrapper[4090]: E0309 16:25:14.036683 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": container with ID starting with aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de not found: ID does not exist" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" Mar 09 16:25:14.036718 master-0 kubenswrapper[4090]: I0309 16:25:14.036709 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} err="failed to get container status \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": rpc error: code = NotFound desc = could not find container \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": container with ID starting with 
aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de not found: ID does not exist" Mar 09 16:25:14.036801 master-0 kubenswrapper[4090]: I0309 16:25:14.036725 4090 scope.go:117] "RemoveContainer" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:14.036923 master-0 kubenswrapper[4090]: I0309 16:25:14.036890 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} err="failed to get container status \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": rpc error: code = NotFound desc = could not find container \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": container with ID starting with 2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231 not found: ID does not exist" Mar 09 16:25:14.036923 master-0 kubenswrapper[4090]: I0309 16:25:14.036913 4090 scope.go:117] "RemoveContainer" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:14.037281 master-0 kubenswrapper[4090]: I0309 16:25:14.037236 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} err="failed to get container status \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": rpc error: code = NotFound desc = could not find container \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": container with ID starting with f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705 not found: ID does not exist" Mar 09 16:25:14.037281 master-0 kubenswrapper[4090]: I0309 16:25:14.037270 4090 scope.go:117] "RemoveContainer" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:14.037808 master-0 kubenswrapper[4090]: I0309 16:25:14.037772 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} err="failed to get container status \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": rpc error: code = NotFound desc = could not find container \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": container with ID starting with 37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4 not found: ID does not exist" Mar 09 16:25:14.037808 master-0 kubenswrapper[4090]: I0309 16:25:14.037795 4090 scope.go:117] "RemoveContainer" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:14.038314 master-0 kubenswrapper[4090]: I0309 16:25:14.038230 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} err="failed to get container status \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": rpc error: code = NotFound desc = could not find container \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": container with ID starting with 3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb not found: ID does not exist" Mar 09 16:25:14.038314 master-0 kubenswrapper[4090]: I0309 16:25:14.038281 4090 scope.go:117] "RemoveContainer" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:14.038654 master-0 kubenswrapper[4090]: I0309 16:25:14.038623 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} err="failed to get container status \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": rpc error: code = NotFound desc = could not find container \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": container with ID starting with 
85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522 not found: ID does not exist" Mar 09 16:25:14.038654 master-0 kubenswrapper[4090]: I0309 16:25:14.038642 4090 scope.go:117] "RemoveContainer" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:14.038955 master-0 kubenswrapper[4090]: I0309 16:25:14.038902 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} err="failed to get container status \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": rpc error: code = NotFound desc = could not find container \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": container with ID starting with fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d not found: ID does not exist" Mar 09 16:25:14.039013 master-0 kubenswrapper[4090]: I0309 16:25:14.038946 4090 scope.go:117] "RemoveContainer" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" Mar 09 16:25:14.039450 master-0 kubenswrapper[4090]: I0309 16:25:14.039397 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} err="failed to get container status \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": rpc error: code = NotFound desc = could not find container \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": container with ID starting with e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab not found: ID does not exist" Mar 09 16:25:14.039450 master-0 kubenswrapper[4090]: I0309 16:25:14.039429 4090 scope.go:117] "RemoveContainer" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" Mar 09 16:25:14.039933 master-0 kubenswrapper[4090]: I0309 16:25:14.039858 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} err="failed to get container status \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": rpc error: code = NotFound desc = could not find container \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": container with ID starting with 275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a not found: ID does not exist" Mar 09 16:25:14.039933 master-0 kubenswrapper[4090]: I0309 16:25:14.039920 4090 scope.go:117] "RemoveContainer" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" Mar 09 16:25:14.040407 master-0 kubenswrapper[4090]: I0309 16:25:14.040356 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} err="failed to get container status \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": rpc error: code = NotFound desc = could not find container \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": container with ID starting with aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de not found: ID does not exist" Mar 09 16:25:14.040407 master-0 kubenswrapper[4090]: I0309 16:25:14.040399 4090 scope.go:117] "RemoveContainer" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:14.040851 master-0 kubenswrapper[4090]: I0309 16:25:14.040792 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} err="failed to get container status \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": rpc error: code = NotFound desc = could not find container \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": container with ID starting with 
2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231 not found: ID does not exist" Mar 09 16:25:14.040851 master-0 kubenswrapper[4090]: I0309 16:25:14.040837 4090 scope.go:117] "RemoveContainer" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:14.041338 master-0 kubenswrapper[4090]: I0309 16:25:14.041301 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} err="failed to get container status \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": rpc error: code = NotFound desc = could not find container \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": container with ID starting with f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705 not found: ID does not exist" Mar 09 16:25:14.041338 master-0 kubenswrapper[4090]: I0309 16:25:14.041324 4090 scope.go:117] "RemoveContainer" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:14.041721 master-0 kubenswrapper[4090]: I0309 16:25:14.041665 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} err="failed to get container status \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": rpc error: code = NotFound desc = could not find container \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": container with ID starting with 37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4 not found: ID does not exist" Mar 09 16:25:14.041721 master-0 kubenswrapper[4090]: I0309 16:25:14.041706 4090 scope.go:117] "RemoveContainer" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:14.042226 master-0 kubenswrapper[4090]: I0309 16:25:14.042167 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} err="failed to get container status \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": rpc error: code = NotFound desc = could not find container \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": container with ID starting with 3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb not found: ID does not exist" Mar 09 16:25:14.042226 master-0 kubenswrapper[4090]: I0309 16:25:14.042218 4090 scope.go:117] "RemoveContainer" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:14.042666 master-0 kubenswrapper[4090]: I0309 16:25:14.042610 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} err="failed to get container status \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": rpc error: code = NotFound desc = could not find container \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": container with ID starting with 85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522 not found: ID does not exist" Mar 09 16:25:14.042666 master-0 kubenswrapper[4090]: I0309 16:25:14.042655 4090 scope.go:117] "RemoveContainer" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:14.043074 master-0 kubenswrapper[4090]: I0309 16:25:14.043027 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} err="failed to get container status \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": rpc error: code = NotFound desc = could not find container \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": container with ID starting with 
fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d not found: ID does not exist" Mar 09 16:25:14.043074 master-0 kubenswrapper[4090]: I0309 16:25:14.043059 4090 scope.go:117] "RemoveContainer" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" Mar 09 16:25:14.043403 master-0 kubenswrapper[4090]: I0309 16:25:14.043368 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} err="failed to get container status \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": rpc error: code = NotFound desc = could not find container \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": container with ID starting with e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab not found: ID does not exist" Mar 09 16:25:14.043403 master-0 kubenswrapper[4090]: I0309 16:25:14.043390 4090 scope.go:117] "RemoveContainer" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" Mar 09 16:25:14.043704 master-0 kubenswrapper[4090]: I0309 16:25:14.043665 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} err="failed to get container status \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": rpc error: code = NotFound desc = could not find container \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": container with ID starting with 275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a not found: ID does not exist" Mar 09 16:25:14.043704 master-0 kubenswrapper[4090]: I0309 16:25:14.043691 4090 scope.go:117] "RemoveContainer" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" Mar 09 16:25:14.043978 master-0 kubenswrapper[4090]: I0309 16:25:14.043936 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} err="failed to get container status \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": rpc error: code = NotFound desc = could not find container \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": container with ID starting with aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de not found: ID does not exist" Mar 09 16:25:14.043978 master-0 kubenswrapper[4090]: I0309 16:25:14.043963 4090 scope.go:117] "RemoveContainer" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:14.044252 master-0 kubenswrapper[4090]: I0309 16:25:14.044206 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} err="failed to get container status \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": rpc error: code = NotFound desc = could not find container \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": container with ID starting with 2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231 not found: ID does not exist" Mar 09 16:25:14.044252 master-0 kubenswrapper[4090]: I0309 16:25:14.044237 4090 scope.go:117] "RemoveContainer" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:14.044507 master-0 kubenswrapper[4090]: I0309 16:25:14.044483 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} err="failed to get container status \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": rpc error: code = NotFound desc = could not find container \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": container with ID starting with 
f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705 not found: ID does not exist" Mar 09 16:25:14.044507 master-0 kubenswrapper[4090]: I0309 16:25:14.044502 4090 scope.go:117] "RemoveContainer" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:14.044788 master-0 kubenswrapper[4090]: I0309 16:25:14.044743 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} err="failed to get container status \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": rpc error: code = NotFound desc = could not find container \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": container with ID starting with 37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4 not found: ID does not exist" Mar 09 16:25:14.044788 master-0 kubenswrapper[4090]: I0309 16:25:14.044778 4090 scope.go:117] "RemoveContainer" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:14.045156 master-0 kubenswrapper[4090]: I0309 16:25:14.045122 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} err="failed to get container status \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": rpc error: code = NotFound desc = could not find container \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": container with ID starting with 3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb not found: ID does not exist" Mar 09 16:25:14.045156 master-0 kubenswrapper[4090]: I0309 16:25:14.045143 4090 scope.go:117] "RemoveContainer" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:14.045449 master-0 kubenswrapper[4090]: I0309 16:25:14.045409 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} err="failed to get container status \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": rpc error: code = NotFound desc = could not find container \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": container with ID starting with 85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522 not found: ID does not exist" Mar 09 16:25:14.045449 master-0 kubenswrapper[4090]: I0309 16:25:14.045445 4090 scope.go:117] "RemoveContainer" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:14.045768 master-0 kubenswrapper[4090]: I0309 16:25:14.045724 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} err="failed to get container status \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": rpc error: code = NotFound desc = could not find container \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": container with ID starting with fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d not found: ID does not exist" Mar 09 16:25:14.045768 master-0 kubenswrapper[4090]: I0309 16:25:14.045751 4090 scope.go:117] "RemoveContainer" containerID="e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab" Mar 09 16:25:14.046062 master-0 kubenswrapper[4090]: I0309 16:25:14.046019 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab"} err="failed to get container status \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": rpc error: code = NotFound desc = could not find container \"e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab\": container with ID starting with 
e48442b2a39c8d3922e85d96e0ddc4e3c15a59d19db701aeb8b709909f33ebab not found: ID does not exist" Mar 09 16:25:14.046062 master-0 kubenswrapper[4090]: I0309 16:25:14.046050 4090 scope.go:117] "RemoveContainer" containerID="275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a" Mar 09 16:25:14.046384 master-0 kubenswrapper[4090]: I0309 16:25:14.046345 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a"} err="failed to get container status \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": rpc error: code = NotFound desc = could not find container \"275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a\": container with ID starting with 275b6f1eb5ba90c465ee957ce57d2bf05c01c1fe79f90a6592265bbb4086131a not found: ID does not exist" Mar 09 16:25:14.046384 master-0 kubenswrapper[4090]: I0309 16:25:14.046370 4090 scope.go:117] "RemoveContainer" containerID="aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de" Mar 09 16:25:14.046792 master-0 kubenswrapper[4090]: I0309 16:25:14.046717 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de"} err="failed to get container status \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": rpc error: code = NotFound desc = could not find container \"aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de\": container with ID starting with aa95b66c2d465167a5f6c09fa4c27ffb9c42ef749378a769f1071109b80a92de not found: ID does not exist" Mar 09 16:25:14.046792 master-0 kubenswrapper[4090]: I0309 16:25:14.046779 4090 scope.go:117] "RemoveContainer" containerID="2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231" Mar 09 16:25:14.047140 master-0 kubenswrapper[4090]: I0309 16:25:14.047089 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231"} err="failed to get container status \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": rpc error: code = NotFound desc = could not find container \"2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231\": container with ID starting with 2bb7c8bb6da7e08b814617c8d2952ab8b462be4c198fe85b6213425b37941231 not found: ID does not exist" Mar 09 16:25:14.047218 master-0 kubenswrapper[4090]: I0309 16:25:14.047133 4090 scope.go:117] "RemoveContainer" containerID="f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705" Mar 09 16:25:14.047577 master-0 kubenswrapper[4090]: I0309 16:25:14.047539 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705"} err="failed to get container status \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": rpc error: code = NotFound desc = could not find container \"f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705\": container with ID starting with f368582a749963c80bbbcba8de7e32e1363fb46e3079e7b03ab6a0fbe5aa2705 not found: ID does not exist" Mar 09 16:25:14.047577 master-0 kubenswrapper[4090]: I0309 16:25:14.047563 4090 scope.go:117] "RemoveContainer" containerID="37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4" Mar 09 16:25:14.047857 master-0 kubenswrapper[4090]: I0309 16:25:14.047815 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4"} err="failed to get container status \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": rpc error: code = NotFound desc = could not find container \"37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4\": container with ID starting with 
37dbb75d8be1075a6e8833d6c8572f7ad8df7921a902d7e8d11b6d7dbc90a2b4 not found: ID does not exist" Mar 09 16:25:14.047857 master-0 kubenswrapper[4090]: I0309 16:25:14.047842 4090 scope.go:117] "RemoveContainer" containerID="3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb" Mar 09 16:25:14.048165 master-0 kubenswrapper[4090]: I0309 16:25:14.048124 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb"} err="failed to get container status \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": rpc error: code = NotFound desc = could not find container \"3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb\": container with ID starting with 3659c56fc5f5282530ed53df4d1de7ce938ffab0de67d67657bf62f9f29a48cb not found: ID does not exist" Mar 09 16:25:14.048165 master-0 kubenswrapper[4090]: I0309 16:25:14.048146 4090 scope.go:117] "RemoveContainer" containerID="85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522" Mar 09 16:25:14.048561 master-0 kubenswrapper[4090]: I0309 16:25:14.048517 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522"} err="failed to get container status \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": rpc error: code = NotFound desc = could not find container \"85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522\": container with ID starting with 85b26d20e0a2215d7440cb37e5a1ab4edb815fd1550e1e4203256c2b5706d522 not found: ID does not exist" Mar 09 16:25:14.048627 master-0 kubenswrapper[4090]: I0309 16:25:14.048563 4090 scope.go:117] "RemoveContainer" containerID="fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d" Mar 09 16:25:14.048975 master-0 kubenswrapper[4090]: I0309 16:25:14.048934 4090 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d"} err="failed to get container status \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": rpc error: code = NotFound desc = could not find container \"fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d\": container with ID starting with fb43e5778eb8dfe86c13672013846dd1020723abbfcc0daff5d9321c59f35e3d not found: ID does not exist" Mar 09 16:25:14.118531 master-0 kubenswrapper[4090]: I0309 16:25:14.118410 4090 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.118531 master-0 kubenswrapper[4090]: I0309 16:25:14.118496 4090 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.118531 master-0 kubenswrapper[4090]: I0309 16:25:14.118510 4090 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2a584527-bd42-4982-91c8-8f4c833dbfb5-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.118531 master-0 kubenswrapper[4090]: I0309 16:25:14.118520 4090 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2a584527-bd42-4982-91c8-8f4c833dbfb5-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.118531 master-0 kubenswrapper[4090]: I0309 16:25:14.118530 4090 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fgsn\" (UniqueName: \"kubernetes.io/projected/2a584527-bd42-4982-91c8-8f4c833dbfb5-kube-api-access-2fgsn\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:14.449375 master-0 kubenswrapper[4090]: 
I0309 16:25:14.449305 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vwgwh"] Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449437 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kubecfg-setup" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449450 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kubecfg-setup" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449458 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-node" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449464 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-node" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449471 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-acl-logging" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449476 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-acl-logging" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449482 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="sbdb" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449490 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="sbdb" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449496 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" 
containerName="ovnkube-controller" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449501 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovnkube-controller" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449508 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-ovn-metrics" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449514 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-ovn-metrics" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449522 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-controller" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449527 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-controller" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449535 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="northd" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449542 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="northd" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: E0309 16:25:14.449549 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="nbdb" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449556 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="nbdb" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449590 4090 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovnkube-controller" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449597 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-node" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449603 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="kube-rbac-proxy-ovn-metrics" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449609 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="sbdb" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449617 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-controller" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449623 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="ovn-acl-logging" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449628 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="northd" Mar 09 16:25:14.449684 master-0 kubenswrapper[4090]: I0309 16:25:14.449634 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" containerName="nbdb" Mar 09 16:25:14.450883 master-0 kubenswrapper[4090]: I0309 16:25:14.450405 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.455921 master-0 kubenswrapper[4090]: I0309 16:25:14.455812 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 09 16:25:14.456594 master-0 kubenswrapper[4090]: I0309 16:25:14.456562 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 09 16:25:14.508220 master-0 kubenswrapper[4090]: I0309 16:25:14.508084 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:25:14.508220 master-0 kubenswrapper[4090]: I0309 16:25:14.508124 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:14.508460 master-0 kubenswrapper[4090]: E0309 16:25:14.508232 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:25:14.508460 master-0 kubenswrapper[4090]: E0309 16:25:14.508302 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:25:14.521487 master-0 kubenswrapper[4090]: I0309 16:25:14.521408 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521487 master-0 kubenswrapper[4090]: I0309 16:25:14.521469 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521850 master-0 kubenswrapper[4090]: I0309 16:25:14.521502 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521850 master-0 kubenswrapper[4090]: I0309 16:25:14.521559 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521850 master-0 kubenswrapper[4090]: I0309 16:25:14.521580 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521850 master-0 kubenswrapper[4090]: I0309 16:25:14.521780 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521850 master-0 kubenswrapper[4090]: I0309 16:25:14.521823 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.521850 master-0 kubenswrapper[4090]: I0309 16:25:14.521848 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.521871 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.521892 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.521935 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.521966 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.521985 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522005 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522066 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522106 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522145 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522167 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522188 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") 
pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.522342 master-0 kubenswrapper[4090]: I0309 16:25:14.522216 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.523081 master-0 kubenswrapper[4090]: I0309 16:25:14.523029 4090 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdrtp"] Mar 09 16:25:14.528365 master-0 kubenswrapper[4090]: I0309 16:25:14.528311 4090 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gdrtp"] Mar 09 16:25:14.622949 master-0 kubenswrapper[4090]: I0309 16:25:14.622861 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.622949 master-0 kubenswrapper[4090]: I0309 16:25:14.622934 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623227 master-0 kubenswrapper[4090]: I0309 16:25:14.622962 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: 
\"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623227 master-0 kubenswrapper[4090]: I0309 16:25:14.623033 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623227 master-0 kubenswrapper[4090]: I0309 16:25:14.623086 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623227 master-0 kubenswrapper[4090]: I0309 16:25:14.623111 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623227 master-0 kubenswrapper[4090]: I0309 16:25:14.623135 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623227 master-0 kubenswrapper[4090]: I0309 16:25:14.623157 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623266 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623324 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623356 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623363 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623377 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623404 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623477 master-0 kubenswrapper[4090]: I0309 16:25:14.623448 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623743 master-0 kubenswrapper[4090]: I0309 16:25:14.623485 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623743 master-0 kubenswrapper[4090]: I0309 16:25:14.623509 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623743 master-0 kubenswrapper[4090]: I0309 16:25:14.623514 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 
16:25:14.623743 master-0 kubenswrapper[4090]: I0309 16:25:14.623533 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623743 master-0 kubenswrapper[4090]: I0309 16:25:14.623559 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623743 master-0 kubenswrapper[4090]: I0309 16:25:14.623659 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.623992 master-0 kubenswrapper[4090]: I0309 16:25:14.623954 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624081 master-0 kubenswrapper[4090]: I0309 16:25:14.624034 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624212 master-0 
kubenswrapper[4090]: I0309 16:25:14.624182 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624253 master-0 kubenswrapper[4090]: I0309 16:25:14.624238 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624292 master-0 kubenswrapper[4090]: I0309 16:25:14.624268 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624329 master-0 kubenswrapper[4090]: I0309 16:25:14.624314 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624365 master-0 kubenswrapper[4090]: I0309 16:25:14.624345 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624403 master-0 kubenswrapper[4090]: I0309 16:25:14.624377 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624403 master-0 kubenswrapper[4090]: I0309 16:25:14.624398 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624508 master-0 kubenswrapper[4090]: I0309 16:25:14.624442 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624508 master-0 kubenswrapper[4090]: I0309 16:25:14.624321 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624590 master-0 kubenswrapper[4090]: I0309 16:25:14.624532 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624590 master-0 kubenswrapper[4090]: I0309 16:25:14.624549 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.624676 master-0 kubenswrapper[4090]: I0309 16:25:14.624625 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.625017 master-0 kubenswrapper[4090]: I0309 16:25:14.624985 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.625056 master-0 kubenswrapper[4090]: I0309 16:25:14.625037 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.625087 master-0 kubenswrapper[4090]: I0309 16:25:14.625079 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.627635 
master-0 kubenswrapper[4090]: I0309 16:25:14.627591 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.645029 master-0 kubenswrapper[4090]: I0309 16:25:14.644953 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.766790 master-0 kubenswrapper[4090]: I0309 16:25:14.766610 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:14.776910 master-0 kubenswrapper[4090]: W0309 16:25:14.776860 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d47955b_b85c_4137_9dea_ff0c20d5ab77.slice/crio-48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec WatchSource:0}: Error finding container 48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec: Status 404 returned error can't find the container with id 48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec Mar 09 16:25:14.926308 master-0 kubenswrapper[4090]: I0309 16:25:14.926257 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec"} Mar 09 16:25:15.512721 master-0 kubenswrapper[4090]: I0309 16:25:15.512643 4090 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="2a584527-bd42-4982-91c8-8f4c833dbfb5" path="/var/lib/kubelet/pods/2a584527-bd42-4982-91c8-8f4c833dbfb5/volumes" Mar 09 16:25:15.930245 master-0 kubenswrapper[4090]: I0309 16:25:15.930051 4090 generic.go:334] "Generic (PLEG): container finished" podID="6d47955b-b85c-4137-9dea-ff0c20d5ab77" containerID="c0b6c146623a62ab0a5823c85168f8b6cd4a93ec0368a37111e0616c32e8f226" exitCode=0 Mar 09 16:25:15.930245 master-0 kubenswrapper[4090]: I0309 16:25:15.930110 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerDied","Data":"c0b6c146623a62ab0a5823c85168f8b6cd4a93ec0368a37111e0616c32e8f226"} Mar 09 16:25:16.508824 master-0 kubenswrapper[4090]: I0309 16:25:16.508265 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:16.509013 master-0 kubenswrapper[4090]: I0309 16:25:16.508299 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:25:16.509271 master-0 kubenswrapper[4090]: E0309 16:25:16.508957 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:25:16.509346 master-0 kubenswrapper[4090]: E0309 16:25:16.509225 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:25:16.936159 master-0 kubenswrapper[4090]: I0309 16:25:16.936109 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"f3de4cdc913da604f3bcdf7c6db71add3b4524e7edeaad18980e64afab713dfe"} Mar 09 16:25:16.936159 master-0 kubenswrapper[4090]: I0309 16:25:16.936153 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"a27fe367b3bb9a6acc9a26e103fe3fc0c1392b2031595f94beeda326d0ef6c1b"} Mar 09 16:25:16.936665 master-0 kubenswrapper[4090]: I0309 16:25:16.936178 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"b6555474c4b3b60f6d94d6527b6afe3d045a50503dadb76babf875309aafb505"} Mar 09 16:25:16.936665 master-0 kubenswrapper[4090]: I0309 16:25:16.936189 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"8d0e88526140f73f2654c1fe529efb11dcee4dbea4886b7406b11eb8fc5b5860"} Mar 09 16:25:16.936665 master-0 kubenswrapper[4090]: I0309 16:25:16.936199 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"c45f82b16a30de465834230204fd9531817387a12eb0481457c0eec66e3d7bb9"} Mar 09 16:25:16.936665 master-0 kubenswrapper[4090]: I0309 16:25:16.936207 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" 
event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"431e74bd88e32e968a7538312ad865cb09d8e77b806bcc492b750a79a8aa4692"} Mar 09 16:25:17.347999 master-0 kubenswrapper[4090]: I0309 16:25:17.347919 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:17.348300 master-0 kubenswrapper[4090]: E0309 16:25:17.348076 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:17.348300 master-0 kubenswrapper[4090]: E0309 16:25:17.348139 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:21.34812125 +0000 UTC m=+174.523436239 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:18.507647 master-0 kubenswrapper[4090]: I0309 16:25:18.507558 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:25:18.507647 master-0 kubenswrapper[4090]: I0309 16:25:18.507655 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:18.508278 master-0 kubenswrapper[4090]: E0309 16:25:18.507705 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:25:18.508278 master-0 kubenswrapper[4090]: E0309 16:25:18.507773 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:25:19.951155 master-0 kubenswrapper[4090]: I0309 16:25:19.951082 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"acfcf59c4deac17bb4b5fd9d4652fe7d0ce06622e9f9e14cb564752b166ffa10"} Mar 09 16:25:20.508176 master-0 kubenswrapper[4090]: I0309 16:25:20.507987 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:20.508176 master-0 kubenswrapper[4090]: I0309 16:25:20.508065 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:25:20.508534 master-0 kubenswrapper[4090]: E0309 16:25:20.508204 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:25:20.508534 master-0 kubenswrapper[4090]: E0309 16:25:20.508327 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" Mar 09 16:25:21.966176 master-0 kubenswrapper[4090]: I0309 16:25:21.965716 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" event={"ID":"6d47955b-b85c-4137-9dea-ff0c20d5ab77","Type":"ContainerStarted","Data":"0e0416b2ab5e853c39b450244e3240dcfc13139f2c13bd11a4154f60cccfc954"} Mar 09 16:25:21.972229 master-0 kubenswrapper[4090]: I0309 16:25:21.967709 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:21.972229 master-0 kubenswrapper[4090]: I0309 16:25:21.967784 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:21.989502 master-0 kubenswrapper[4090]: I0309 16:25:21.989282 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: 
\"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:21.989502 master-0 kubenswrapper[4090]: E0309 16:25:21.989483 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 09 16:25:21.989502 master-0 kubenswrapper[4090]: E0309 16:25:21.989505 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 09 16:25:21.989502 master-0 kubenswrapper[4090]: E0309 16:25:21.989517 4090 projected.go:194] Error preparing data for projected volume kube-api-access-cm4ff for pod openshift-network-diagnostics/network-check-target-ncskk: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:25:21.990873 master-0 kubenswrapper[4090]: E0309 16:25:21.989575 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff podName:7937ccab-a6fb-4401-a4fd-7a2a91a7193f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:53.989557975 +0000 UTC m=+147.164872974 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cm4ff" (UniqueName: "kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff") pod "network-check-target-ncskk" (UID: "7937ccab-a6fb-4401-a4fd-7a2a91a7193f") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 09 16:25:21.995739 master-0 kubenswrapper[4090]: I0309 16:25:21.995684 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:22.028785 master-0 kubenswrapper[4090]: I0309 16:25:22.028676 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" podStartSLOduration=8.028649763 podStartE2EDuration="8.028649763s" podCreationTimestamp="2026-03-09 16:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:25:21.999075452 +0000 UTC m=+115.174390491" watchObservedRunningTime="2026-03-09 16:25:22.028649763 +0000 UTC m=+115.203964772"
Mar 09 16:25:22.508486 master-0 kubenswrapper[4090]: I0309 16:25:22.508409 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:22.508863 master-0 kubenswrapper[4090]: I0309 16:25:22.508459 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:22.508978 master-0 kubenswrapper[4090]: E0309 16:25:22.508951 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:22.509212 master-0 kubenswrapper[4090]: E0309 16:25:22.509179 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:22.968848 master-0 kubenswrapper[4090]: I0309 16:25:22.968745 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:22.992550 master-0 kubenswrapper[4090]: I0309 16:25:22.992494 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:24.508033 master-0 kubenswrapper[4090]: I0309 16:25:24.507927 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:24.509053 master-0 kubenswrapper[4090]: E0309 16:25:24.508096 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:24.509053 master-0 kubenswrapper[4090]: I0309 16:25:24.508416 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:24.509053 master-0 kubenswrapper[4090]: E0309 16:25:24.508600 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:26.508027 master-0 kubenswrapper[4090]: I0309 16:25:26.507921 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:26.508866 master-0 kubenswrapper[4090]: E0309 16:25:26.508110 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:26.508866 master-0 kubenswrapper[4090]: I0309 16:25:26.508172 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:26.508866 master-0 kubenswrapper[4090]: E0309 16:25:26.508334 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:27.434390 master-0 kubenswrapper[4090]: E0309 16:25:27.433924 4090 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Mar 09 16:25:27.652009 master-0 kubenswrapper[4090]: E0309 16:25:27.651835 4090 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 09 16:25:28.508266 master-0 kubenswrapper[4090]: I0309 16:25:28.508193 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:28.508658 master-0 kubenswrapper[4090]: E0309 16:25:28.508477 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:28.508842 master-0 kubenswrapper[4090]: I0309 16:25:28.508193 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:28.509125 master-0 kubenswrapper[4090]: E0309 16:25:28.509087 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:30.508525 master-0 kubenswrapper[4090]: I0309 16:25:30.508381 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:30.509216 master-0 kubenswrapper[4090]: I0309 16:25:30.508437 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:30.509216 master-0 kubenswrapper[4090]: E0309 16:25:30.508579 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:30.509216 master-0 kubenswrapper[4090]: E0309 16:25:30.508816 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:32.508544 master-0 kubenswrapper[4090]: I0309 16:25:32.508418 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:32.508544 master-0 kubenswrapper[4090]: I0309 16:25:32.508526 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:32.509351 master-0 kubenswrapper[4090]: E0309 16:25:32.508599 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332"
Mar 09 16:25:32.509351 master-0 kubenswrapper[4090]: E0309 16:25:32.508735 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ncskk" podUID="7937ccab-a6fb-4401-a4fd-7a2a91a7193f"
Mar 09 16:25:32.797387 master-0 kubenswrapper[4090]: I0309 16:25:32.797247 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 09 16:25:33.964492 master-0 kubenswrapper[4090]: I0309 16:25:33.964409 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"]
Mar 09 16:25:33.965650 master-0 kubenswrapper[4090]: I0309 16:25:33.964829 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:33.972402 master-0 kubenswrapper[4090]: I0309 16:25:33.972314 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 09 16:25:33.972778 master-0 kubenswrapper[4090]: I0309 16:25:33.972631 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 09 16:25:33.972994 master-0 kubenswrapper[4090]: I0309 16:25:33.972899 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 09 16:25:33.975057 master-0 kubenswrapper[4090]: I0309 16:25:33.974966 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.104203 master-0 kubenswrapper[4090]: I0309 16:25:34.104089 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.104600 master-0 kubenswrapper[4090]: I0309 16:25:34.104250 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.104600 master-0 kubenswrapper[4090]: I0309 16:25:34.104299 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.205148 master-0 kubenswrapper[4090]: I0309 16:25:34.205053 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.205148 master-0 kubenswrapper[4090]: I0309 16:25:34.205099 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.205148 master-0 kubenswrapper[4090]: I0309 16:25:34.205118 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.206752 master-0 kubenswrapper[4090]: I0309 16:25:34.206507 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.211351 master-0 kubenswrapper[4090]: I0309 16:25:34.211280 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.407282 master-0 kubenswrapper[4090]: I0309 16:25:34.404544 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"]
Mar 09 16:25:34.407282 master-0 kubenswrapper[4090]: I0309 16:25:34.405344 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:34.407282 master-0 kubenswrapper[4090]: I0309 16:25:34.405699 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"]
Mar 09 16:25:34.407282 master-0 kubenswrapper[4090]: I0309 16:25:34.406341 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"
Mar 09 16:25:34.412865 master-0 kubenswrapper[4090]: I0309 16:25:34.410297 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.412865 master-0 kubenswrapper[4090]: I0309 16:25:34.411524 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"]
Mar 09 16:25:34.412865 master-0 kubenswrapper[4090]: I0309 16:25:34.412270 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:34.415757 master-0 kubenswrapper[4090]: I0309 16:25:34.414472 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"]
Mar 09 16:25:34.415757 master-0 kubenswrapper[4090]: I0309 16:25:34.415075 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"
Mar 09 16:25:34.415757 master-0 kubenswrapper[4090]: I0309 16:25:34.415501 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 09 16:25:34.416108 master-0 kubenswrapper[4090]: I0309 16:25:34.415837 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.418163 master-0 kubenswrapper[4090]: I0309 16:25:34.416483 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 09 16:25:34.418163 master-0 kubenswrapper[4090]: I0309 16:25:34.416733 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 09 16:25:34.418163 master-0 kubenswrapper[4090]: I0309 16:25:34.417312 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.418163 master-0 kubenswrapper[4090]: I0309 16:25:34.417903 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 09 16:25:34.419768 master-0 kubenswrapper[4090]: I0309 16:25:34.419495 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 09 16:25:34.419768 master-0 kubenswrapper[4090]: I0309 16:25:34.419550 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 09 16:25:34.420338 master-0 kubenswrapper[4090]: I0309 16:25:34.420014 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.420338 master-0 kubenswrapper[4090]: I0309 16:25:34.420314 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.425815 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.425917 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.426097 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk"]
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.426193 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.426507 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.426545 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.426604 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.427165 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 09 16:25:34.427460 master-0 kubenswrapper[4090]: I0309 16:25:34.427247 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.428238 master-0 kubenswrapper[4090]: I0309 16:25:34.427487 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 09 16:25:34.437687 master-0 kubenswrapper[4090]: I0309 16:25:34.435101 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"]
Mar 09 16:25:34.442347 master-0 kubenswrapper[4090]: I0309 16:25:34.442287 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 09 16:25:34.442347 master-0 kubenswrapper[4090]: I0309 16:25:34.442337 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.442654 master-0 kubenswrapper[4090]: I0309 16:25:34.442574 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.443176 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.443326 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.443362 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"]
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.443493 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.443901 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"]
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.444344 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.444416 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:34.445507 master-0 kubenswrapper[4090]: I0309 16:25:34.445288 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"]
Mar 09 16:25:34.447553 master-0 kubenswrapper[4090]: I0309 16:25:34.447508 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"]
Mar 09 16:25:34.448037 master-0 kubenswrapper[4090]: I0309 16:25:34.447838 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"
Mar 09 16:25:34.448123 master-0 kubenswrapper[4090]: I0309 16:25:34.448101 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"]
Mar 09 16:25:34.449183 master-0 kubenswrapper[4090]: I0309 16:25:34.448500 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:34.449183 master-0 kubenswrapper[4090]: I0309 16:25:34.448503 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:34.449183 master-0 kubenswrapper[4090]: I0309 16:25:34.448834 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 09 16:25:34.450176 master-0 kubenswrapper[4090]: I0309 16:25:34.450134 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"]
Mar 09 16:25:34.450630 master-0 kubenswrapper[4090]: I0309 16:25:34.450598 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:25:34.451527 master-0 kubenswrapper[4090]: I0309 16:25:34.451489 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.452120 master-0 kubenswrapper[4090]: I0309 16:25:34.452089 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 09 16:25:34.452502 master-0 kubenswrapper[4090]: I0309 16:25:34.452470 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 09 16:25:34.452650 master-0 kubenswrapper[4090]: I0309 16:25:34.452629 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 09 16:25:34.452897 master-0 kubenswrapper[4090]: I0309 16:25:34.452855 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.452994 master-0 kubenswrapper[4090]: I0309 16:25:34.452709 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 09 16:25:34.453078 master-0 kubenswrapper[4090]: I0309 16:25:34.453050 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"]
Mar 09 16:25:34.453264 master-0 kubenswrapper[4090]: I0309 16:25:34.453193 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.453347 master-0 kubenswrapper[4090]: I0309 16:25:34.453317 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"]
Mar 09 16:25:34.453586 master-0 kubenswrapper[4090]: I0309 16:25:34.453565 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.453692 master-0 kubenswrapper[4090]: I0309 16:25:34.453662 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:34.453893 master-0 kubenswrapper[4090]: I0309 16:25:34.453622 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"
Mar 09 16:25:34.454051 master-0 kubenswrapper[4090]: I0309 16:25:34.454018 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 09 16:25:34.454131 master-0 kubenswrapper[4090]: I0309 16:25:34.454100 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 09 16:25:34.454131 master-0 kubenswrapper[4090]: I0309 16:25:34.454125 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 09 16:25:34.454242 master-0 kubenswrapper[4090]: I0309 16:25:34.454217 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 09 16:25:34.454379 master-0 kubenswrapper[4090]: I0309 16:25:34.454349 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.454512 master-0 kubenswrapper[4090]: I0309 16:25:34.454485 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 09 16:25:34.454597 master-0 kubenswrapper[4090]: I0309 16:25:34.454526 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.454660 master-0 kubenswrapper[4090]: I0309 16:25:34.454597 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 09 16:25:34.454716 master-0 kubenswrapper[4090]: I0309 16:25:34.454696 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.454865 master-0 kubenswrapper[4090]: I0309 16:25:34.454830 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 09 16:25:34.455073 master-0 kubenswrapper[4090]: I0309 16:25:34.455046 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 09 16:25:34.455222 master-0 kubenswrapper[4090]: I0309 16:25:34.455189 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 09 16:25:34.455614 master-0 kubenswrapper[4090]: I0309 16:25:34.455533 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 09 16:25:34.455758 master-0 kubenswrapper[4090]: I0309 16:25:34.455721 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"]
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.456276 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.456668 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.456743 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.456797 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"]
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.457366 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.457552 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.457636 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.457688 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"]
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.457977 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.458232 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.458275 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 09 16:25:34.458550 master-0 kubenswrapper[4090]: I0309 16:25:34.458472 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.460014 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"]
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.460611 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.460818 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"]
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.461297 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.461972 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.462109 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.462504 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.462780 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-6sknh"]
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.462818 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.462883 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.462947 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.463228 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.463297 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.463235 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.464757 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.464992 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.465080 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.464993 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.465245 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.465322 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-g8n5t"]
Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.466144 4090 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:34.466204 master-0 kubenswrapper[4090]: I0309 16:25:34.466173 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 09 16:25:34.472728 master-0 kubenswrapper[4090]: I0309 16:25:34.472640 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"] Mar 09 16:25:34.473864 master-0 kubenswrapper[4090]: I0309 16:25:34.473470 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 09 16:25:34.473864 master-0 kubenswrapper[4090]: I0309 16:25:34.473780 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 09 16:25:34.474666 master-0 kubenswrapper[4090]: I0309 16:25:34.474499 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 09 16:25:34.474666 master-0 kubenswrapper[4090]: I0309 16:25:34.474574 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 09 16:25:34.476166 master-0 kubenswrapper[4090]: I0309 16:25:34.476126 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 09 16:25:34.478618 master-0 kubenswrapper[4090]: I0309 16:25:34.477956 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 09 16:25:34.478618 master-0 kubenswrapper[4090]: I0309 16:25:34.478084 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"] Mar 09 16:25:34.478618 master-0 kubenswrapper[4090]: I0309 
16:25:34.478118 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 09 16:25:34.478618 master-0 kubenswrapper[4090]: I0309 16:25:34.478245 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:34.478618 master-0 kubenswrapper[4090]: I0309 16:25:34.478445 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 09 16:25:34.481291 master-0 kubenswrapper[4090]: I0309 16:25:34.481235 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 09 16:25:34.482382 master-0 kubenswrapper[4090]: I0309 16:25:34.482334 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 09 16:25:34.482622 master-0 kubenswrapper[4090]: I0309 16:25:34.482604 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 09 16:25:34.483512 master-0 kubenswrapper[4090]: I0309 16:25:34.482920 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 09 16:25:34.507654 master-0 kubenswrapper[4090]: I0309 16:25:34.507570 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.507654 master-0 kubenswrapper[4090]: I0309 16:25:34.507604 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507629 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507694 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5trxh\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507714 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507734 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507752 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507772 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.507805 master-0 kubenswrapper[4090]: I0309 16:25:34.507794 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.507991 master-0 kubenswrapper[4090]: I0309 16:25:34.507828 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: 
\"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.507991 master-0 kubenswrapper[4090]: I0309 16:25:34.507846 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:34.507991 master-0 kubenswrapper[4090]: I0309 16:25:34.507868 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.507991 master-0 kubenswrapper[4090]: I0309 16:25:34.507936 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.508135 master-0 kubenswrapper[4090]: I0309 16:25:34.507995 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:25:34.508135 master-0 kubenswrapper[4090]: I0309 16:25:34.508000 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.508215 master-0 kubenswrapper[4090]: I0309 16:25:34.508004 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:34.508215 master-0 kubenswrapper[4090]: I0309 16:25:34.508151 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.508215 master-0 kubenswrapper[4090]: I0309 16:25:34.508191 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.508312 master-0 kubenswrapper[4090]: I0309 16:25:34.508216 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") 
pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.508312 master-0 kubenswrapper[4090]: I0309 16:25:34.508241 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.508312 master-0 kubenswrapper[4090]: I0309 16:25:34.508268 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.508312 master-0 kubenswrapper[4090]: I0309 16:25:34.508287 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:34.508477 master-0 kubenswrapper[4090]: I0309 16:25:34.508323 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.508477 master-0 kubenswrapper[4090]: I0309 16:25:34.508399 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.508477 master-0 kubenswrapper[4090]: I0309 16:25:34.508471 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.508583 master-0 kubenswrapper[4090]: I0309 16:25:34.508499 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.508583 master-0 kubenswrapper[4090]: I0309 16:25:34.508539 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.508639 master-0 kubenswrapper[4090]: I0309 16:25:34.508592 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.508670 master-0 kubenswrapper[4090]: I0309 16:25:34.508645 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.508701 master-0 kubenswrapper[4090]: I0309 16:25:34.508675 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zwh\" (UniqueName: \"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.508777 master-0 kubenswrapper[4090]: I0309 16:25:34.508738 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.508811 master-0 kubenswrapper[4090]: I0309 16:25:34.508778 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.508811 master-0 kubenswrapper[4090]: I0309 16:25:34.508801 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.508866 master-0 kubenswrapper[4090]: I0309 16:25:34.508820 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:34.508866 master-0 kubenswrapper[4090]: I0309 16:25:34.508841 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 
16:25:34.508866 master-0 kubenswrapper[4090]: I0309 16:25:34.508861 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:34.508940 master-0 kubenswrapper[4090]: I0309 16:25:34.508886 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.508940 master-0 kubenswrapper[4090]: I0309 16:25:34.508908 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.508994 master-0 kubenswrapper[4090]: I0309 16:25:34.508952 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.509022 master-0 kubenswrapper[4090]: I0309 16:25:34.508996 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.509053 master-0 kubenswrapper[4090]: I0309 16:25:34.509027 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.509086 master-0 kubenswrapper[4090]: I0309 16:25:34.509054 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:34.509086 master-0 kubenswrapper[4090]: I0309 16:25:34.509078 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.509143 master-0 kubenswrapper[4090]: I0309 16:25:34.509109 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-497s5\" (UniqueName: 
\"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.509173 master-0 kubenswrapper[4090]: I0309 16:25:34.509162 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.509201 master-0 kubenswrapper[4090]: I0309 16:25:34.509180 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.509201 master-0 kubenswrapper[4090]: I0309 16:25:34.509195 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.509255 master-0 kubenswrapper[4090]: I0309 16:25:34.509214 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: 
\"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.509255 master-0 kubenswrapper[4090]: I0309 16:25:34.509230 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID: \"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" Mar 09 16:25:34.509255 master-0 kubenswrapper[4090]: I0309 16:25:34.509245 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:34.509345 master-0 kubenswrapper[4090]: I0309 16:25:34.509296 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.509345 master-0 kubenswrapper[4090]: I0309 16:25:34.509326 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.509402 master-0 
kubenswrapper[4090]: I0309 16:25:34.509352 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.509482 master-0 kubenswrapper[4090]: I0309 16:25:34.509371 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsqs\" (UniqueName: \"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.509482 master-0 kubenswrapper[4090]: I0309 16:25:34.509461 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.509562 master-0 kubenswrapper[4090]: I0309 16:25:34.509484 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.509562 master-0 kubenswrapper[4090]: I0309 16:25:34.509509 4090 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.509562 master-0 kubenswrapper[4090]: I0309 16:25:34.509530 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.509562 master-0 kubenswrapper[4090]: I0309 16:25:34.509556 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.509669 master-0 kubenswrapper[4090]: I0309 16:25:34.509577 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.509669 master-0 kubenswrapper[4090]: I0309 16:25:34.509599 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.509669 master-0 kubenswrapper[4090]: I0309 16:25:34.509626 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.509669 master-0 kubenswrapper[4090]: I0309 16:25:34.509649 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.509773 master-0 kubenswrapper[4090]: I0309 16:25:34.509715 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.509830 master-0 kubenswrapper[4090]: I0309 16:25:34.509790 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod 
\"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.509868 master-0 kubenswrapper[4090]: I0309 16:25:34.509834 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.509868 master-0 kubenswrapper[4090]: I0309 16:25:34.509859 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.509922 master-0 kubenswrapper[4090]: I0309 16:25:34.509879 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p2nd\" (UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:34.509922 master-0 kubenswrapper[4090]: I0309 16:25:34.509902 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.509922 master-0 
kubenswrapper[4090]: I0309 16:25:34.509906 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 09 16:25:34.510226 master-0 kubenswrapper[4090]: I0309 16:25:34.510185 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 09 16:25:34.510702 master-0 kubenswrapper[4090]: I0309 16:25:34.510623 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 09 16:25:34.585160 master-0 kubenswrapper[4090]: I0309 16:25:34.585086 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:34.610601 master-0 kubenswrapper[4090]: I0309 16:25:34.610554 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.610710 master-0 kubenswrapper[4090]: I0309 16:25:34.610608 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.610710 master-0 kubenswrapper[4090]: I0309 16:25:34.610639 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod 
\"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.610710 master-0 kubenswrapper[4090]: I0309 16:25:34.610660 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.610710 master-0 kubenswrapper[4090]: I0309 16:25:34.610683 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.610710 master-0 kubenswrapper[4090]: I0309 16:25:34.610703 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.610849 master-0 kubenswrapper[4090]: I0309 16:25:34.610724 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p2nd\" (UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:34.610849 master-0 kubenswrapper[4090]: I0309 
16:25:34.610748 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.610849 master-0 kubenswrapper[4090]: I0309 16:25:34.610770 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.610849 master-0 kubenswrapper[4090]: I0309 16:25:34.610792 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.610849 master-0 kubenswrapper[4090]: I0309 16:25:34.610817 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.610849 master-0 kubenswrapper[4090]: I0309 16:25:34.610840 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5trxh\" (UniqueName: 
\"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.611051 master-0 kubenswrapper[4090]: I0309 16:25:34.610925 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: E0309 16:25:34.611086 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: E0309 16:25:34.611175 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.111151581 +0000 UTC m=+128.286466630 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: E0309 16:25:34.611335 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: E0309 16:25:34.611399 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.111380567 +0000 UTC m=+128.286695676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.611892 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.611934 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: 
\"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.611934 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.611988 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.611991 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.612081 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.612125 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.612149 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.612184 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: E0309 16:25:34.612209 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.612214 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.615521 master-0 kubenswrapper[4090]: I0309 16:25:34.612246 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: E0309 16:25:34.612268 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.112251122 +0000 UTC m=+128.287566182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612296 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612297 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612330 4090 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612339 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612363 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612383 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612404 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: 
\"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612453 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612517 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612548 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: E0309 16:25:34.612562 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612566 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: E0309 16:25:34.612573 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 09 16:25:34.618507 master-0 kubenswrapper[4090]: I0309 16:25:34.612588 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: E0309 16:25:34.612611 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.112599723 +0000 UTC m=+128.287914712 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612630 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612658 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612684 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612712 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zwh\" (UniqueName: 
\"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612734 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612757 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612787 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612813 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod 
\"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612840 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612864 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612886 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:34.619781 master-0 kubenswrapper[4090]: I0309 16:25:34.612910 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.619781 
master-0 kubenswrapper[4090]: E0309 16:25:34.612917 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.612930 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: E0309 16:25:34.612945 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.112936502 +0000 UTC m=+128.288251491 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.612965 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.612992 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613012 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613031 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " 
pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613069 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613089 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-497s5\" (UniqueName: \"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613121 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613143 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613162 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613186 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613210 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613231 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID: \"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" Mar 09 16:25:34.620389 master-0 kubenswrapper[4090]: I0309 16:25:34.613251 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: 
\"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613275 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613282 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613281 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613305 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613295 4090 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"
Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: E0309 16:25:34.613439 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.113406557 +0000 UTC m=+128.288721656 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found
Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613553 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: E0309 16:25:34.613605 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613745 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID:
\"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.613784 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: E0309 16:25:34.613928 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: E0309 16:25:34.614204 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: E0309 16:25:34.614238 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.11422823 +0000 UTC m=+128.289543319 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.614570 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.620936 master-0 kubenswrapper[4090]: I0309 16:25:34.615284 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.615520 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.616179 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: 
\"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.616673 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: E0309 16:25:34.615574 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.115532978 +0000 UTC m=+128.290847977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.616868 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsqs\" (UniqueName: \"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.616911 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod 
\"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.616969 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.616988 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.617041 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.617254 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.621392 master-0 
kubenswrapper[4090]: I0309 16:25:34.617293 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.617344 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.617388 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: I0309 16:25:34.617500 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:34.621392 master-0 kubenswrapper[4090]: E0309 16:25:34.617936 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: E0309 16:25:34.617986 4090 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.117969619 +0000 UTC m=+128.293284698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: I0309 16:25:34.617161 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: I0309 16:25:34.618042 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: I0309 16:25:34.618277 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: I0309 16:25:34.618567 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: I0309 16:25:34.618580 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: I0309 16:25:34.618695 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: E0309 16:25:34.618772 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: E0309 16:25:34.618828 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.118808454 +0000 UTC m=+128.294123543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:34.622005 master-0 kubenswrapper[4090]: E0309 16:25:34.618956 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.118943678 +0000 UTC m=+128.294258787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:34.625762 master-0 kubenswrapper[4090]: E0309 16:25:34.625714 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:34.625936 master-0 kubenswrapper[4090]: I0309 16:25:34.625904 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.625977 master-0 kubenswrapper[4090]: I0309 16:25:34.625928 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.626298 master-0 kubenswrapper[4090]: I0309 16:25:34.626264 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.626707 master-0 kubenswrapper[4090]: I0309 16:25:34.626670 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.626707 master-0 kubenswrapper[4090]: I0309 16:25:34.626680 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.627134 master-0 kubenswrapper[4090]: I0309 16:25:34.627090 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.627201 master-0 kubenswrapper[4090]: I0309 16:25:34.627168 4090 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.627295 master-0 kubenswrapper[4090]: I0309 16:25:34.627262 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.631122 master-0 kubenswrapper[4090]: I0309 16:25:34.630311 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:34.631566 master-0 kubenswrapper[4090]: I0309 16:25:34.631525 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.631954 master-0 kubenswrapper[4090]: E0309 16:25:34.631916 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:35.125781077 +0000 UTC m=+128.301096066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:25:34.720579 master-0 kubenswrapper[4090]: I0309 16:25:34.720217 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:34.720579 master-0 kubenswrapper[4090]: E0309 16:25:34.720391 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:25:34.720847 master-0 kubenswrapper[4090]: E0309 16:25:34.720633 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:35.220611839 +0000 UTC m=+128.395926828 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:25:34.720847 master-0 kubenswrapper[4090]: I0309 16:25:34.720694 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:34.923548 master-0 kubenswrapper[4090]: I0309 16:25:34.919574 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"] Mar 09 16:25:34.923548 master-0 kubenswrapper[4090]: I0309 16:25:34.919631 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"] Mar 09 16:25:34.923548 master-0 kubenswrapper[4090]: I0309 16:25:34.919773 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"] Mar 09 16:25:34.923887 master-0 kubenswrapper[4090]: I0309 16:25:34.923753 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"] Mar 09 16:25:34.923887 master-0 kubenswrapper[4090]: I0309 16:25:34.923801 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"] Mar 09 16:25:34.932474 master-0 kubenswrapper[4090]: I0309 16:25:34.924476 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"] Mar 09 16:25:34.932474 master-0 kubenswrapper[4090]: I0309 16:25:34.925812 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"] Mar 09 16:25:34.932474 master-0 kubenswrapper[4090]: I0309 16:25:34.927360 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"] Mar 09 16:25:34.932760 master-0 kubenswrapper[4090]: I0309 16:25:34.932511 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"] Mar 09 16:25:34.936453 master-0 kubenswrapper[4090]: I0309 16:25:34.935468 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk"] Mar 09 16:25:34.936453 master-0 kubenswrapper[4090]: I0309 16:25:34.935499 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"] Mar 09 16:25:34.942096 master-0 kubenswrapper[4090]: I0309 16:25:34.938833 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-g8n5t"] Mar 09 16:25:34.942096 master-0 kubenswrapper[4090]: I0309 16:25:34.938893 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"] Mar 09 16:25:34.942096 master-0 kubenswrapper[4090]: I0309 16:25:34.940108 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-g4tdb"] Mar 09 16:25:34.942096 master-0 kubenswrapper[4090]: I0309 16:25:34.940268 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5trxh\" (UniqueName: 
\"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.942096 master-0 kubenswrapper[4090]: I0309 16:25:34.940734 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:34.950495 master-0 kubenswrapper[4090]: I0309 16:25:34.947459 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:34.950495 master-0 kubenswrapper[4090]: I0309 16:25:34.948018 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:34.950495 master-0 kubenswrapper[4090]: I0309 16:25:34.948944 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-497s5\" (UniqueName: \"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:34.950495 master-0 kubenswrapper[4090]: I0309 16:25:34.948996 4090 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Mar 09 16:25:34.950495 master-0 kubenswrapper[4090]: I0309 16:25:34.949638 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:34.950945 master-0 kubenswrapper[4090]: I0309 16:25:34.950914 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"] Mar 09 16:25:34.956722 master-0 kubenswrapper[4090]: I0309 16:25:34.951502 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:34.957546 master-0 kubenswrapper[4090]: I0309 16:25:34.957228 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"] Mar 09 16:25:34.957546 master-0 kubenswrapper[4090]: I0309 16:25:34.957279 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"] Mar 09 16:25:34.957546 master-0 kubenswrapper[4090]: I0309 16:25:34.957292 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"] Mar 09 16:25:34.959220 master-0 kubenswrapper[4090]: I0309 16:25:34.959176 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p2nd\" 
(UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:34.961013 master-0 kubenswrapper[4090]: I0309 16:25:34.960978 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:34.961153 master-0 kubenswrapper[4090]: I0309 16:25:34.961117 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:34.961214 master-0 kubenswrapper[4090]: I0309 16:25:34.961153 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsqs\" (UniqueName: \"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:34.962621 master-0 kubenswrapper[4090]: I0309 16:25:34.962556 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.962688 master-0 kubenswrapper[4090]: I0309 16:25:34.962631 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"] Mar 09 16:25:34.962688 master-0 kubenswrapper[4090]: I0309 16:25:34.962657 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:34.963551 master-0 kubenswrapper[4090]: I0309 16:25:34.963131 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:34.963551 master-0 kubenswrapper[4090]: I0309 16:25:34.963154 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:34.963551 master-0 kubenswrapper[4090]: I0309 16:25:34.963399 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod 
\"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:34.963926 master-0 kubenswrapper[4090]: I0309 16:25:34.963800 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:34.964411 master-0 kubenswrapper[4090]: I0309 16:25:34.964204 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID: \"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" Mar 09 16:25:34.964647 master-0 kubenswrapper[4090]: I0309 16:25:34.964611 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:34.965346 master-0 kubenswrapper[4090]: I0309 16:25:34.964723 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 
16:25:34.965346 master-0 kubenswrapper[4090]: I0309 16:25:34.964813 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:34.965346 master-0 kubenswrapper[4090]: I0309 16:25:34.965013 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:34.965483 master-0 kubenswrapper[4090]: I0309 16:25:34.965362 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:34.968167 master-0 kubenswrapper[4090]: I0309 16:25:34.966988 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:34.970289 master-0 kubenswrapper[4090]: I0309 16:25:34.969956 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"] Mar 09 16:25:34.972231 master-0 kubenswrapper[4090]: I0309 16:25:34.972164 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zwh\" (UniqueName: \"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:34.977620 master-0 kubenswrapper[4090]: I0309 16:25:34.977582 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"] Mar 09 16:25:34.978849 master-0 kubenswrapper[4090]: I0309 16:25:34.978827 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"] Mar 09 16:25:34.983088 master-0 kubenswrapper[4090]: I0309 16:25:34.983053 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-6sknh"] Mar 09 16:25:34.985510 master-0 kubenswrapper[4090]: I0309 16:25:34.985481 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"] Mar 09 16:25:35.007104 master-0 kubenswrapper[4090]: I0309 16:25:35.007035 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" event={"ID":"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d","Type":"ContainerStarted","Data":"553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0"} Mar 09 16:25:35.024994 master-0 kubenswrapper[4090]: I0309 16:25:35.024871 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod 
\"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.025178 master-0 kubenswrapper[4090]: I0309 16:25:35.025010 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.025178 master-0 kubenswrapper[4090]: I0309 16:25:35.025040 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.073503 master-0 kubenswrapper[4090]: I0309 16:25:35.073404 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:35.089932 master-0 kubenswrapper[4090]: I0309 16:25:35.089867 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:35.095997 master-0 kubenswrapper[4090]: I0309 16:25:35.095966 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:35.127238 master-0 kubenswrapper[4090]: I0309 16:25:35.126808 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127530 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127607 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127656 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127679 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127705 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127732 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:35.127759 master-0 kubenswrapper[4090]: I0309 16:25:35.127758 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.127785 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.127809 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: 
\"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.127839 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.127885 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.127926 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.127960 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.128007 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:35.128051 master-0 kubenswrapper[4090]: I0309 16:25:35.128040 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128458 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128568 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: I0309 16:25:35.128594 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128631 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:36.128607145 +0000 UTC m=+129.303922144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128741 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128750 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128775 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.128762359 +0000 UTC m=+129.304077358 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128803 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.12878218 +0000 UTC m=+129.304097239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128814 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128850 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.128838172 +0000 UTC m=+129.304153171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128858 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:35.128880 master-0 kubenswrapper[4090]: E0309 16:25:35.128884 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.128876723 +0000 UTC m=+129.304191792 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.128900 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.128968 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.128956095 +0000 UTC m=+129.304271104 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129003 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129037 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.129025147 +0000 UTC m=+129.304340146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129101 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129135 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.12912438 +0000 UTC m=+129.304439449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129156 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.12914507 +0000 UTC m=+129.304460159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129214 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129228 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129248 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.129236033 +0000 UTC m=+129.304551072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129267 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.129256144 +0000 UTC m=+129.304571213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129331 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: E0309 16:25:35.129366 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.129355066 +0000 UTC m=+129.304670115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found
Mar 09 16:25:35.129713 master-0 kubenswrapper[4090]: I0309 16:25:35.129539 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:35.134802 master-0 kubenswrapper[4090]: I0309 16:25:35.133676 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:35.142617 master-0 kubenswrapper[4090]: I0309 16:25:35.141306 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:35.148788 master-0 kubenswrapper[4090]: I0309 16:25:35.148695 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:25:35.158299 master-0 kubenswrapper[4090]: I0309 16:25:35.157668 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:35.161699 master-0 kubenswrapper[4090]: I0309 16:25:35.161668 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"
Mar 09 16:25:35.168577 master-0 kubenswrapper[4090]: I0309 16:25:35.168532 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:35.179525 master-0 kubenswrapper[4090]: I0309 16:25:35.176525 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:25:35.190851 master-0 kubenswrapper[4090]: I0309 16:25:35.189984 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:25:35.230664 master-0 kubenswrapper[4090]: I0309 16:25:35.230601 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:35.231306 master-0 kubenswrapper[4090]: E0309 16:25:35.230753 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:25:35.231306 master-0 kubenswrapper[4090]: E0309 16:25:35.230793 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:36.230780071 +0000 UTC m=+129.406095060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found
Mar 09 16:25:35.291458 master-0 kubenswrapper[4090]: I0309 16:25:35.287623 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:35.297054 master-0 kubenswrapper[4090]: I0309 16:25:35.296452 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"]
Mar 09 16:25:35.318001 master-0 kubenswrapper[4090]: W0309 16:25:35.317959 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod457f42a7_f14c_4d61_a87a_bc1ed422feed.slice/crio-79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2 WatchSource:0}: Error finding container 79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2: Status 404 returned error can't find the container with id 79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2
Mar 09 16:25:35.500440 master-0 kubenswrapper[4090]: I0309 16:25:35.500261 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"]
Mar 09 16:25:35.501032 master-0 kubenswrapper[4090]: I0309 16:25:35.501001 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk"]
Mar 09 16:25:35.506322 master-0 kubenswrapper[4090]: I0309 16:25:35.505636 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"]
Mar 09 16:25:35.506538 master-0 kubenswrapper[4090]: I0309 16:25:35.506318 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"]
Mar 09 16:25:35.508389 master-0 kubenswrapper[4090]: W0309 16:25:35.508346 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4dfdcc_e182_4831_98e4_1eedb069bcf6.slice/crio-39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46 WatchSource:0}: Error finding container 39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46: Status 404 returned error can't find the container with id 39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46
Mar 09 16:25:35.513016 master-0 kubenswrapper[4090]: W0309 16:25:35.512788 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a4491c_12cc_4531_ad3e_246e93ed7842.slice/crio-5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf WatchSource:0}: Error finding container 5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf: Status 404 returned error can't find the container with id 5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf
Mar 09 16:25:35.814385 master-0 kubenswrapper[4090]: I0309 16:25:35.814340 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"]
Mar 09 16:25:35.816039 master-0 kubenswrapper[4090]: I0309 16:25:35.816007 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"]
Mar 09 16:25:35.826345 master-0 kubenswrapper[4090]: W0309 16:25:35.826291 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62ba179_443d_424f_8cff_c75677e8cd5c.slice/crio-9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa WatchSource:0}: Error finding container 9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa: Status 404 returned error can't find the container with id 9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa
Mar 09 16:25:35.835335 master-0 kubenswrapper[4090]: I0309 16:25:35.835293 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"]
Mar 09 16:25:35.839646 master-0 kubenswrapper[4090]: I0309 16:25:35.839550 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"]
Mar 09 16:25:35.840370 master-0 kubenswrapper[4090]: I0309 16:25:35.840350 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"]
Mar 09 16:25:35.844448 master-0 kubenswrapper[4090]: I0309 16:25:35.841367 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"]
Mar 09 16:25:35.870516 master-0 kubenswrapper[4090]: W0309 16:25:35.870475 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f8ec87_ff04_4c3e_afe8_1b7898b22a0a.slice/crio-788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04 WatchSource:0}: Error finding container 788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04: Status 404 returned error can't find the container with id 788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04
Mar 09 16:25:35.880635 master-0 kubenswrapper[4090]: W0309 16:25:35.880182 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e38be5_1d33_4171_b27f_78a335f1590b.slice/crio-b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e WatchSource:0}: Error finding container b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e: Status 404 returned error can't find the container with id b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e
Mar 09 16:25:36.011876 master-0 kubenswrapper[4090]: I0309 16:25:36.011814 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" event={"ID":"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a","Type":"ContainerStarted","Data":"788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04"}
Mar 09 16:25:36.013270 master-0 kubenswrapper[4090]: I0309 16:25:36.013133 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" event={"ID":"3a612208-f777-486f-9dde-048b2d898c7f","Type":"ContainerStarted","Data":"ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210"}
Mar 09 16:25:36.015707 master-0 kubenswrapper[4090]: I0309 16:25:36.015661 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" event={"ID":"d6912539-9b06-4e2c-b6a8-155df31147f2","Type":"ContainerStarted","Data":"a517766120d5207dbc0746849224568d7e6239234bc628933b81ef9e4c5bff53"}
Mar 09 16:25:36.015819 master-0 kubenswrapper[4090]: I0309 16:25:36.015715 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" event={"ID":"d6912539-9b06-4e2c-b6a8-155df31147f2","Type":"ContainerStarted","Data":"8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6"}
Mar 09 16:25:36.018725 master-0 kubenswrapper[4090]: I0309 16:25:36.018676 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" event={"ID":"6c4dfdcc-e182-4831-98e4-1eedb069bcf6","Type":"ContainerStarted","Data":"39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46"}
Mar 09 16:25:36.024587 master-0 kubenswrapper[4090]: I0309 16:25:36.024537 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerStarted","Data":"84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab"}
Mar 09 16:25:36.025721 master-0 kubenswrapper[4090]: I0309 16:25:36.025651 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerStarted","Data":"79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2"}
Mar 09 16:25:36.026802 master-0 kubenswrapper[4090]: I0309 16:25:36.026774 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" event={"ID":"e2e38be5-1d33-4171-b27f-78a335f1590b","Type":"ContainerStarted","Data":"b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e"}
Mar 09 16:25:36.027732 master-0 kubenswrapper[4090]: I0309 16:25:36.027696 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" event={"ID":"34a4491c-12cc-4531-ad3e-246e93ed7842","Type":"ContainerStarted","Data":"5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf"}
Mar 09 16:25:36.028585 master-0 kubenswrapper[4090]: I0309 16:25:36.028559 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" event={"ID":"d2d3c20a-f92e-433b-9fbc-b667b7bcf175","Type":"ContainerStarted","Data":"cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764"}
Mar 09 16:25:36.029590 master-0 kubenswrapper[4090]: I0309 16:25:36.029559 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-g4tdb" event={"ID":"709aad35-08ca-4ff5-abe5-e1558c8dc83f","Type":"ContainerStarted","Data":"461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203"}
Mar 09 16:25:36.031219 master-0 kubenswrapper[4090]: I0309 16:25:36.031163 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" event={"ID":"a62ba179-443d-424f-8cff-c75677e8cd5c","Type":"ContainerStarted","Data":"9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa"}
Mar 09 16:25:36.032608 master-0 kubenswrapper[4090]: I0309 16:25:36.032576 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" event={"ID":"6cf9eae5-38bc-48fa-8339-d0751bb18e8c","Type":"ContainerStarted","Data":"360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778"}
Mar 09 16:25:36.061376 master-0 kubenswrapper[4090]: I0309 16:25:36.058462 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" podStartSLOduration=94.05844439 podStartE2EDuration="1m34.05844439s" podCreationTimestamp="2026-03-09 16:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:25:36.056382711 +0000 UTC m=+129.231697700" watchObservedRunningTime="2026-03-09 16:25:36.05844439 +0000 UTC m=+129.233759379"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.144528 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: E0309 16:25:36.144715 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: E0309 16:25:36.144802 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.144779165 +0000 UTC m=+131.320094154 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.144858 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.144887 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.144913 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.144955 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.144982 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.145010 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: E0309 16:25:36.145036 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.145049 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: E0309 16:25:36.145075 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145064514 +0000 UTC m=+131.320379503 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: E0309 16:25:36.145073 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: I0309 16:25:36.145096 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:36.145586 master-0 kubenswrapper[4090]: E0309 16:25:36.145107 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145134 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145123655 +0000 UTC m=+131.320438734 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145164 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145140636 +0000 UTC m=+131.320455625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145178 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: I0309 16:25:36.145196 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145209 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145199538 +0000 UTC m=+131.320514647 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: I0309 16:25:36.145230 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145246 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145262 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: I0309 16:25:36.145268 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145284 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.14527336 +0000 UTC m=+131.320588439 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145304 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.14529437 +0000 UTC m=+131.320609499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145320 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145344 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145335922 +0000 UTC m=+131.320650981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145487 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:36.146279 master-0 kubenswrapper[4090]: E0309 16:25:36.145528 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 09 16:25:36.146791 master-0 kubenswrapper[4090]: E0309 16:25:36.145533 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145522708 +0000 UTC m=+131.320837777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:36.146791 master-0 kubenswrapper[4090]: E0309 16:25:36.145487 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 09 16:25:36.146791 master-0 kubenswrapper[4090]: E0309 16:25:36.145648 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145636941 +0000 UTC m=+131.320951930 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found
Mar 09 16:25:36.146791 master-0 kubenswrapper[4090]: E0309 16:25:36.145700 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145692513 +0000 UTC m=+131.321007582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found
Mar 09 16:25:36.146791 master-0 kubenswrapper[4090]: E0309 16:25:36.145721 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 09 16:25:36.146791 master-0 kubenswrapper[4090]: E0309 16:25:36.145774 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.145764725 +0000 UTC m=+131.321079814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found
Mar 09 16:25:36.246259 master-0 kubenswrapper[4090]: I0309 16:25:36.246193 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:36.246259 master-0 kubenswrapper[4090]: I0309 16:25:36.246259 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:36.246490 master-0 kubenswrapper[4090]: E0309 16:25:36.246412 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 09 16:25:36.246490 master-0 kubenswrapper[4090]: E0309 16:25:36.246479 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:40.246462708 +0000 UTC m=+193.421777687 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found Mar 09 16:25:36.246571 master-0 kubenswrapper[4090]: E0309 16:25:36.246538 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 09 16:25:36.246571 master-0 kubenswrapper[4090]: E0309 16:25:36.246559 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:38.246552501 +0000 UTC m=+131.421867490 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:25:37.173661 master-0 kubenswrapper[4090]: I0309 16:25:37.172190 4090 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="cd074429ed45f5a8693a7e2dec95a69a0356de57104bf51c86da0531be3d00f3" exitCode=0 Mar 09 16:25:37.173661 master-0 kubenswrapper[4090]: I0309 16:25:37.173086 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerDied","Data":"cd074429ed45f5a8693a7e2dec95a69a0356de57104bf51c86da0531be3d00f3"} Mar 09 16:25:38.160395 master-0 kubenswrapper[4090]: I0309 16:25:38.160181 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:38.160395 master-0 kubenswrapper[4090]: I0309 16:25:38.160230 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:38.160395 master-0 kubenswrapper[4090]: I0309 16:25:38.160275 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:38.160395 master-0 kubenswrapper[4090]: I0309 16:25:38.160300 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:38.160395 master-0 kubenswrapper[4090]: I0309 16:25:38.160358 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod 
\"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:38.160395 master-0 kubenswrapper[4090]: I0309 16:25:38.160389 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160535 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160549 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160601 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.160587037 +0000 UTC m=+135.335902026 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160613 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.160608547 +0000 UTC m=+135.335923536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: I0309 16:25:38.160633 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160664 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: I0309 16:25:38.160694 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: I0309 16:25:38.160729 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: I0309 16:25:38.160779 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: I0309 16:25:38.160814 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: I0309 16:25:38.160850 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160937 4090 secret.go:189] 
Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:38.160983 master-0 kubenswrapper[4090]: E0309 16:25:38.160964 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.160954797 +0000 UTC m=+135.336269786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161022 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161027 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161044 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.1610367 +0000 UTC m=+135.336351689 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161055 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.16104974 +0000 UTC m=+135.336364729 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161064 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.16105982 +0000 UTC m=+135.336374809 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161084 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161121 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161162 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161198 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161265 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 09 16:25:38.161476 master-0 kubenswrapper[4090]: E0309 16:25:38.161371 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:38.161885 master-0 kubenswrapper[4090]: E0309 16:25:38.161836 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.161098452 +0000 UTC m=+135.336413441 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:38.161885 master-0 kubenswrapper[4090]: E0309 16:25:38.161869 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.161863624 +0000 UTC m=+135.337178613 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found Mar 09 16:25:38.162218 master-0 kubenswrapper[4090]: E0309 16:25:38.162053 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.161994967 +0000 UTC m=+135.337309956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:38.162218 master-0 kubenswrapper[4090]: E0309 16:25:38.162123 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:42.16208843 +0000 UTC m=+135.337403419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found Mar 09 16:25:38.162218 master-0 kubenswrapper[4090]: E0309 16:25:38.162137 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.162130561 +0000 UTC m=+135.337445550 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:38.162218 master-0 kubenswrapper[4090]: E0309 16:25:38.162159 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.162153432 +0000 UTC m=+135.337468421 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:25:38.261716 master-0 kubenswrapper[4090]: I0309 16:25:38.261656 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:38.262405 master-0 kubenswrapper[4090]: E0309 16:25:38.261816 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 09 16:25:38.262405 master-0 kubenswrapper[4090]: E0309 16:25:38.261891 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:42.261870797 +0000 UTC m=+135.437185776 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:25:42.217579 master-0 kubenswrapper[4090]: I0309 16:25:42.217494 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:42.217579 master-0 kubenswrapper[4090]: I0309 16:25:42.217548 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:42.217579 master-0 kubenswrapper[4090]: I0309 16:25:42.217579 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.217705 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 
16:25:42.217795 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.217776038 +0000 UTC m=+143.393091027 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.217898 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.217955 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.217939263 +0000 UTC m=+143.393254252 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.218023 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.218121 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218097587 +0000 UTC m=+143.393412656 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: I0309 16:25:42.218119 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.218191 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: I0309 16:25:42.218221 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.218252 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218239601 +0000 UTC m=+143.393554590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.218260 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: E0309 16:25:42.218291 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218280802 +0000 UTC m=+143.393595901 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:42.218289 master-0 kubenswrapper[4090]: I0309 16:25:42.218286 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: I0309 16:25:42.218342 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218349 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218402 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218393905 +0000 UTC m=+143.393708974 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: I0309 16:25:42.218465 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: I0309 16:25:42.218498 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218511 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218527 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: I0309 16:25:42.218536 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218562 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218566 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.21855572 +0000 UTC m=+143.393870769 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218590 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.21858142 +0000 UTC m=+143.393896519 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218605 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: I0309 16:25:42.218611 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218643 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218631722 +0000 UTC m=+143.393946801 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: I0309 16:25:42.218667 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:42.218774 master-0 kubenswrapper[4090]: E0309 16:25:42.218688 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:42.219316 master-0 kubenswrapper[4090]: E0309 16:25:42.218737 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218719814 +0000 UTC m=+143.394034803 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:42.219316 master-0 kubenswrapper[4090]: E0309 16:25:42.218742 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 09 16:25:42.219316 master-0 kubenswrapper[4090]: E0309 16:25:42.218759 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218745065 +0000 UTC m=+143.394060154 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:42.219316 master-0 kubenswrapper[4090]: E0309 16:25:42.218781 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.218772237 +0000 UTC m=+143.394087336 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found Mar 09 16:25:42.320272 master-0 kubenswrapper[4090]: I0309 16:25:42.320106 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:42.320527 master-0 kubenswrapper[4090]: E0309 16:25:42.320307 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 09 16:25:42.320527 master-0 kubenswrapper[4090]: E0309 16:25:42.320392 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.320370937 +0000 UTC m=+143.495685926 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:25:42.845192 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 09 16:25:42.866207 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 09 16:25:42.866618 master-0 systemd[1]: Stopped Kubernetes Kubelet. 
Mar 09 16:25:42.867897 master-0 systemd[1]: kubelet.service: Consumed 9.521s CPU time. Mar 09 16:25:42.887587 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 09 16:25:42.994282 master-0 kubenswrapper[7604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:25:42.994282 master-0 kubenswrapper[7604]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 09 16:25:42.994282 master-0 kubenswrapper[7604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:25:42.994282 master-0 kubenswrapper[7604]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:25:42.995473 master-0 kubenswrapper[7604]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 09 16:25:42.995473 master-0 kubenswrapper[7604]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 09 16:25:42.995473 master-0 kubenswrapper[7604]: I0309 16:25:42.994405 7604 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002211 7604 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002232 7604 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002237 7604 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002244 7604 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002249 7604 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002255 7604 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002259 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002263 7604 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002293 7604 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002299 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 09 16:25:43.004641 master-0 kubenswrapper[7604]: W0309 16:25:43.002304 7604 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 09 16:25:43.005988 master-0 kubenswrapper[7604]: W0309 16:25:43.002308 7604 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.005997 7604 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006014 7604 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006018 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006023 7604 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006028 7604 feature_gate.go:330] unrecognized feature gate: Example Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006033 7604 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006037 7604 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006041 7604 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006044 7604 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006048 7604 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006056 7604 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006062 7604 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006067 7604 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006075 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006080 7604 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 09 16:25:43.006062 master-0 kubenswrapper[7604]: W0309 16:25:43.006085 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006089 7604 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006093 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006097 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 09 16:25:43.006605 master-0 
kubenswrapper[7604]: W0309 16:25:43.006101 7604 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006106 7604 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006109 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006113 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006117 7604 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006120 7604 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006124 7604 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006128 7604 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006132 7604 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006135 7604 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006139 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006144 7604 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006147 7604 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 
16:25:43.006151 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006157 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006161 7604 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 09 16:25:43.006605 master-0 kubenswrapper[7604]: W0309 16:25:43.006165 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006169 7604 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006173 7604 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006179 7604 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006184 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006187 7604 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006191 7604 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006194 7604 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006198 7604 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006202 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006206 7604 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006210 7604 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006215 7604 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006219 7604 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006223 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006226 7604 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006230 7604 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 
16:25:43.006234 7604 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006238 7604 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006242 7604 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 09 16:25:43.007145 master-0 kubenswrapper[7604]: W0309 16:25:43.006246 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: W0309 16:25:43.006250 7604 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: W0309 16:25:43.006254 7604 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: W0309 16:25:43.006257 7604 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: W0309 16:25:43.006261 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006394 7604 flags.go:64] FLAG: --address="0.0.0.0" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006405 7604 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006414 7604 flags.go:64] FLAG: --anonymous-auth="true" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006436 7604 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006442 7604 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006446 7604 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 
16:25:43.006453 7604 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006459 7604 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006464 7604 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006468 7604 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006473 7604 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006478 7604 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006482 7604 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006487 7604 flags.go:64] FLAG: --cgroup-root="" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006491 7604 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006495 7604 flags.go:64] FLAG: --client-ca-file="" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006500 7604 flags.go:64] FLAG: --cloud-config="" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006504 7604 flags.go:64] FLAG: --cloud-provider="" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006508 7604 flags.go:64] FLAG: --cluster-dns="[]" Mar 09 16:25:43.007702 master-0 kubenswrapper[7604]: I0309 16:25:43.006516 7604 flags.go:64] FLAG: --cluster-domain="" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006521 7604 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006526 7604 flags.go:64] 
FLAG: --config-dir="" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006530 7604 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006536 7604 flags.go:64] FLAG: --container-log-max-files="5" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006542 7604 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006547 7604 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006551 7604 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006556 7604 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006561 7604 flags.go:64] FLAG: --contention-profiling="false" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006566 7604 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006570 7604 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006574 7604 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006578 7604 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006584 7604 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006588 7604 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006592 7604 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006596 7604 
flags.go:64] FLAG: --enable-load-reader="false" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006601 7604 flags.go:64] FLAG: --enable-server="true" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006605 7604 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006613 7604 flags.go:64] FLAG: --event-burst="100" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006617 7604 flags.go:64] FLAG: --event-qps="50" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006622 7604 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006627 7604 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006631 7604 flags.go:64] FLAG: --eviction-hard="" Mar 09 16:25:43.008259 master-0 kubenswrapper[7604]: I0309 16:25:43.006637 7604 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006642 7604 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006647 7604 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006651 7604 flags.go:64] FLAG: --eviction-soft="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006656 7604 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006660 7604 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006664 7604 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006668 7604 flags.go:64] FLAG: 
--experimental-mounter-path="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006673 7604 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006678 7604 flags.go:64] FLAG: --fail-swap-on="true" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006682 7604 flags.go:64] FLAG: --feature-gates="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006687 7604 flags.go:64] FLAG: --file-check-frequency="20s" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006692 7604 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006696 7604 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006701 7604 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006705 7604 flags.go:64] FLAG: --healthz-port="10248" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006709 7604 flags.go:64] FLAG: --help="false" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006714 7604 flags.go:64] FLAG: --hostname-override="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006718 7604 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006723 7604 flags.go:64] FLAG: --http-check-frequency="20s" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006727 7604 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006731 7604 flags.go:64] FLAG: --image-credential-provider-config="" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006735 7604 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: 
I0309 16:25:43.006740 7604 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006744 7604 flags.go:64] FLAG: --image-service-endpoint=""
Mar 09 16:25:43.009023 master-0 kubenswrapper[7604]: I0309 16:25:43.006748 7604 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006753 7604 flags.go:64] FLAG: --kube-api-burst="100"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006757 7604 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006761 7604 flags.go:64] FLAG: --kube-api-qps="50"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006765 7604 flags.go:64] FLAG: --kube-reserved=""
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006769 7604 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006773 7604 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006777 7604 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006782 7604 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006786 7604 flags.go:64] FLAG: --lock-file=""
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006789 7604 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006794 7604 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006798 7604 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006806 7604 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006810 7604 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006816 7604 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006821 7604 flags.go:64] FLAG: --logging-format="text"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006826 7604 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006832 7604 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006836 7604 flags.go:64] FLAG: --manifest-url=""
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006841 7604 flags.go:64] FLAG: --manifest-url-header=""
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006847 7604 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006852 7604 flags.go:64] FLAG: --max-open-files="1000000"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006858 7604 flags.go:64] FLAG: --max-pods="110"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006862 7604 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 09 16:25:43.009747 master-0 kubenswrapper[7604]: I0309 16:25:43.006866 7604 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006871 7604 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006875 7604 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006879 7604 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006884 7604 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006888 7604 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006903 7604 flags.go:64] FLAG: --node-status-max-images="50"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006907 7604 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006912 7604 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006916 7604 flags.go:64] FLAG: --pod-cidr=""
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006920 7604 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006927 7604 flags.go:64] FLAG: --pod-manifest-path=""
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006932 7604 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006936 7604 flags.go:64] FLAG: --pods-per-core="0"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006940 7604 flags.go:64] FLAG: --port="10250"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006944 7604 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006949 7604 flags.go:64] FLAG: --provider-id=""
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006953 7604 flags.go:64] FLAG: --qos-reserved=""
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006957 7604 flags.go:64] FLAG: --read-only-port="10255"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006961 7604 flags.go:64] FLAG: --register-node="true"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006965 7604 flags.go:64] FLAG: --register-schedulable="true"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006970 7604 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006979 7604 flags.go:64] FLAG: --registry-burst="10"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006983 7604 flags.go:64] FLAG: --registry-qps="5"
Mar 09 16:25:43.010544 master-0 kubenswrapper[7604]: I0309 16:25:43.006989 7604 flags.go:64] FLAG: --reserved-cpus=""
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.006993 7604 flags.go:64] FLAG: --reserved-memory=""
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.006998 7604 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007002 7604 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007007 7604 flags.go:64] FLAG: --rotate-certificates="false"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007020 7604 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007025 7604 flags.go:64] FLAG: --runonce="false"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007029 7604 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007033 7604 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007038 7604 flags.go:64] FLAG: --seccomp-default="false"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007042 7604 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007046 7604 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007050 7604 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007055 7604 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007060 7604 flags.go:64] FLAG: --storage-driver-password="root"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007064 7604 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007068 7604 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007072 7604 flags.go:64] FLAG: --storage-driver-user="root"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007077 7604 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007082 7604 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007086 7604 flags.go:64] FLAG: --system-cgroups=""
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007090 7604 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007099 7604 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007103 7604 flags.go:64] FLAG: --tls-cert-file=""
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007107 7604 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 09 16:25:43.011326 master-0 kubenswrapper[7604]: I0309 16:25:43.007114 7604 flags.go:64] FLAG: --tls-min-version=""
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007118 7604 flags.go:64] FLAG: --tls-private-key-file=""
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007122 7604 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007126 7604 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007131 7604 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007135 7604 flags.go:64] FLAG: --v="2"
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007142 7604 flags.go:64] FLAG: --version="false"
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007150 7604 flags.go:64] FLAG: --vmodule=""
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007156 7604 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: I0309 16:25:43.007161 7604 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007287 7604 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007294 7604 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007298 7604 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007303 7604 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007326 7604 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007332 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007336 7604 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007341 7604 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007345 7604 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007348 7604 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007353 7604 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007356 7604 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:25:43.012348 master-0 kubenswrapper[7604]: W0309 16:25:43.007360 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007364 7604 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007367 7604 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007371 7604 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007377 7604 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007382 7604 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007386 7604 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007390 7604 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007394 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007399 7604 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007402 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007406 7604 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007411 7604 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007416 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007442 7604 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007446 7604 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007450 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007455 7604 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007459 7604 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:25:43.012966 master-0 kubenswrapper[7604]: W0309 16:25:43.007462 7604 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007466 7604 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007470 7604 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007476 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007480 7604 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007484 7604 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007488 7604 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007492 7604 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007496 7604 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007499 7604 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007503 7604 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007506 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007510 7604 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007514 7604 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007518 7604 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007522 7604 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007526 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007529 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007533 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007537 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:25:43.013832 master-0 kubenswrapper[7604]: W0309 16:25:43.007541 7604 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007544 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007548 7604 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007552 7604 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007555 7604 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007559 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007563 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007567 7604 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007570 7604 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007574 7604 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007578 7604 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007583 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007588 7604 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007592 7604 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007595 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007599 7604 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007603 7604 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007608 7604 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007613 7604 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:25:43.014392 master-0 kubenswrapper[7604]: W0309 16:25:43.007617 7604 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:25:43.014863 master-0 kubenswrapper[7604]: W0309 16:25:43.007621 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:25:43.014863 master-0 kubenswrapper[7604]: I0309 16:25:43.007633 7604 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:25:43.014863 master-0 kubenswrapper[7604]: I0309 16:25:43.014767 7604 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 09 16:25:43.014863 master-0 kubenswrapper[7604]: I0309 16:25:43.014808 7604 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014880 7604 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014890 7604 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014895 7604 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014901 7604 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014905 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014910 7604 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014915 7604 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014919 7604 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014924 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014930 7604 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014935 7604 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014939 7604 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014944 7604 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014949 7604 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014954 7604 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014960 7604 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014966 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014970 7604 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:25:43.014968 master-0 kubenswrapper[7604]: W0309 16:25:43.014976 7604 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.014981 7604 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.014986 7604 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.014992 7604 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.014998 7604 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015002 7604 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015007 7604 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015011 7604 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015015 7604 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015020 7604 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015024 7604 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015029 7604 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015070 7604 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015075 7604 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015079 7604 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015084 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015092 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015097 7604 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015102 7604 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015106 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:25:43.015514 master-0 kubenswrapper[7604]: W0309 16:25:43.015111 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015115 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015120 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015124 7604 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015129 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015134 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015139 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015144 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015150 7604 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015156 7604 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015162 7604 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015167 7604 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015172 7604 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015177 7604 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015183 7604 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015188 7604 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015192 7604 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015196 7604 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015201 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:25:43.015981 master-0 kubenswrapper[7604]: W0309 16:25:43.015206 7604 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015212 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015217 7604 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015220 7604 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015224 7604 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015229 7604 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015234 7604 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015238 7604 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015242 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015247 7604 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015251 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015255 7604 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015259 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015264 7604 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: W0309 16:25:43.015268 7604 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:25:43.016418 master-0 kubenswrapper[7604]: I0309 16:25:43.015275 7604 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015417 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015444 7604 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015449 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015454 7604 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015459 7604 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015463 7604 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015467 7604 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015472 7604 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015478 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015483 7604 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015489 7604 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015494 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015499 7604 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015503 7604 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015508 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015512 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015517 7604 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015521 7604 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015527 7604 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:25:43.016793 master-0 kubenswrapper[7604]: W0309 16:25:43.015534 7604 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015539 7604 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015544 7604 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015549 7604 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015555 7604 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015560 7604 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015566 7604 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015571 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015575 7604 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015580 7604 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015585 7604 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015590 7604 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015594 7604 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015598 7604 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015604 7604 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015609 7604 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015613 7604 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015617 7604 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015621 7604 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015625 7604 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:25:43.017250 master-0 kubenswrapper[7604]: W0309 16:25:43.015629 7604
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015634 7604 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015638 7604 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015642 7604 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015646 7604 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015650 7604 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015654 7604 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015658 7604 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015661 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015666 7604 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015670 7604 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015674 7604 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015678 7604 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015683 7604 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015686 7604 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015690 7604 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015693 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015697 7604 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015701 7604 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 09 16:25:43.017803 master-0 kubenswrapper[7604]: W0309 16:25:43.015706 7604 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015710 7604 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015714 7604 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015718 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015722 7604 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015726 7604 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015730 7604 feature_gate.go:330] unrecognized feature gate: Example Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015734 7604 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015738 7604 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015742 7604 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015746 7604 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015751 7604 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015756 7604 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: W0309 16:25:43.015760 7604 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: I0309 16:25:43.015767 7604 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 09 16:25:43.018244 master-0 kubenswrapper[7604]: I0309 16:25:43.015967 7604 server.go:940] "Client rotation is on, will bootstrap in background" Mar 09 16:25:43.018614 master-0 kubenswrapper[7604]: I0309 16:25:43.017769 7604 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 09 16:25:43.018614 master-0 kubenswrapper[7604]: I0309 16:25:43.017887 7604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 09 16:25:43.018614 master-0 kubenswrapper[7604]: I0309 16:25:43.018191 7604 server.go:997] "Starting client certificate rotation"
Mar 09 16:25:43.018614 master-0 kubenswrapper[7604]: I0309 16:25:43.018203 7604 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 09 16:25:43.018614 master-0 kubenswrapper[7604]: I0309 16:25:43.018347 7604 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 12:35:38.502000104 +0000 UTC
Mar 09 16:25:43.018614 master-0 kubenswrapper[7604]: I0309 16:25:43.018387 7604 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h9m55.483615296s for next certificate rotation
Mar 09 16:25:43.019138 master-0 kubenswrapper[7604]: I0309 16:25:43.019103 7604 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 09 16:25:43.020402 master-0 kubenswrapper[7604]: I0309 16:25:43.020370 7604 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 09 16:25:43.023236 master-0 kubenswrapper[7604]: I0309 16:25:43.023198 7604 log.go:25] "Validated CRI v1 runtime API"
Mar 09 16:25:43.025487 master-0 kubenswrapper[7604]: I0309 16:25:43.025445 7604 log.go:25] "Validated CRI v1 image API"
Mar 09 16:25:43.026831 master-0 kubenswrapper[7604]: I0309 16:25:43.026809 7604 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 09 16:25:43.031355 master-0 kubenswrapper[7604]: I0309 16:25:43.031259 7604 fs.go:135] Filesystem UUIDs: map[4d92f182-6acb-4a41-8103-6903266f66d5:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 09 16:25:43.031956 master-0 kubenswrapper[7604]: I0309 16:25:43.031344 7604 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4/userdata/shm major:0 minor:128 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec/userdata/shm major:0 minor:138 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0/userdata/shm major:0 minor:214 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70/userdata/shm major:0 minor:102 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~projected/kube-api-access-hkrlr:{mountpoint:/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~projected/kube-api-access-hkrlr major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~projected/kube-api-access-xkjv9:{mountpoint:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~projected/kube-api-access-xkjv9 major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ba020e0-1728-4e56-9618-d0ec3d9126eb/volumes/kubernetes.io~projected/kube-api-access-tnw68:{mountpoint:/var/lib/kubelet/pods/1ba020e0-1728-4e56-9618-d0ec3d9126eb/volumes/kubernetes.io~projected/kube-api-access-tnw68 major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~projected/kube-api-access-4zxck:{mountpoint:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~projected/kube-api-access-4zxck major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/kube-api-access-psgk6:{mountpoint:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/kube-api-access-psgk6 major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~projected/kube-api-access major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~projected/kube-api-access-j244n:{mountpoint:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~projected/kube-api-access-j244n major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~secret/serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~projected/kube-api-access-497s5:{mountpoint:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~projected/kube-api-access-497s5 major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4bd3c489-427c-4a47-b7b9-5d1611b9be12/volumes/kubernetes.io~projected/kube-api-access-gc9jl:{mountpoint:/var/lib/kubelet/pods/4bd3c489-427c-4a47-b7b9-5d1611b9be12/volumes/kubernetes.io~projected/kube-api-access-gc9jl major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~projected/kube-api-access-rrt7m:{mountpoint:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~projected/kube-api-access-rrt7m major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~projected/kube-api-access-782hr:{mountpoint:/var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~projected/kube-api-access-782hr major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~projected/kube-api-access-kvh62:{mountpoint:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~projected/kube-api-access-kvh62 major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~secret/webhook-cert major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~projected/kube-api-access-bdmsj:{mountpoint:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~projected/kube-api-access-bdmsj major:0 minor:229 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/etcd-client major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~projected/kube-api-access-98llp:{mountpoint:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~projected/kube-api-access-98llp major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:136 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/709aad35-08ca-4ff5-abe5-e1558c8dc83f/volumes/kubernetes.io~projected/kube-api-access-579rp:{mountpoint:/var/lib/kubelet/pods/709aad35-08ca-4ff5-abe5-e1558c8dc83f/volumes/kubernetes.io~projected/kube-api-access-579rp major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~projected/kube-api-access-4p2nd:{mountpoint:/var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~projected/kube-api-access-4p2nd major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77a20946-c236-417e-8333-6d1aac88bbc2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/77a20946-c236-417e-8333-6d1aac88bbc2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~projected/kube-api-access-fv95c:{mountpoint:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~projected/kube-api-access-fv95c major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a62ba179-443d-424f-8cff-c75677e8cd5c/volumes/kubernetes.io~projected/kube-api-access-z242f:{mountpoint:/var/lib/kubelet/pods/a62ba179-443d-424f-8cff-c75677e8cd5c/volumes/kubernetes.io~projected/kube-api-access-z242f major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~projected/kube-api-access-pr46z:{mountpoint:/var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~projected/kube-api-access-pr46z major:0 minor:234 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~projected/kube-api-access-vqcqb:{mountpoint:/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~projected/kube-api-access-vqcqb major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~projected/kube-api-access-nl7dv:{mountpoint:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~projected/kube-api-access-nl7dv major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~secret/serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~projected/kube-api-access-sst4g:{mountpoint:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~projected/kube-api-access-sst4g major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df2ec8b2-02d7-40c4-ac20-32615d689697/volumes/kubernetes.io~projected/kube-api-access-rfj7p:{mountpoint:/var/lib/kubelet/pods/df2ec8b2-02d7-40c4-ac20-32615d689697/volumes/kubernetes.io~projected/kube-api-access-rfj7p major:0 minor:98 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~projected/kube-api-access-ctsqs:{mountpoint:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~projected/kube-api-access-ctsqs major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~projected/kube-api-access-whqvw:{mountpoint:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~projected/kube-api-access-whqvw major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~projected/kube-api-access-p9dfn:{mountpoint:/var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~projected/kube-api-access-p9dfn major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/kube-api-access-5trxh:{mountpoint:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/kube-api-access-5trxh major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~projected/kube-api-access-98j7c:{mountpoint:/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~projected/kube-api-access-98j7c major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~projected/kube-api-access-55zwh:{mountpoint:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~projected/kube-api-access-55zwh major:0 minor:252 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/b7f0551f790d4e9259e587ec1641c9a6da7b371e90cacd06b0b8afea64076ff7/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/41e23514274abc4c424555dab3a75bc6870409d458ee6cba0e89a5c91d75cee4/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/c997df8b80ca33eaac6f540b60935c52454eaad1dd60731300d4674a77b66b4c/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/51315601588b05b9d577e82c215e5b9d4a2de05e9be7dd68b12e3ccf19e1296c/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/5f202d04cd1b857ba9d84b656d848ea137b7304614dbd93071965d89855cabc5/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/8f030d4ef427335519c7c4860b808d0fd4281eff1af384f795942d886bfee2f7/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/09dd65f493d1bfc2d6bfbcedaeae27248f9e547de9ac397ac31ef3f34bf605f2/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/1160aac67a776a5c0ac3b43107ef0dd6a64de95e3f618b88f04aa5c45858980c/merged major:0 minor:134 fsType:overlay 
blockSize:0} overlay_0-145:{mountpoint:/var/lib/containers/storage/overlay/ebf97b0f8d4139acce2032a3b358832cb2a8a3ed004bdb46c319ac33ce9f5c1e/merged major:0 minor:145 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/611b546a073ffe1d4bc64a5ed52c21c0b2487d2d7228cb02c8be7667a8782247/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/d02b6a99360bdc74d721b525e3fc1f06cc0f4d9679bd8c9d6b4b32080f552ef0/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/2d547e8f15944913babddadd93a6a64d0c93d66500dedcfc24a1f43fca428186/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/bd6264d528467067b9f48023048889fc824bcab3ce4f68a9ce8b0723e5fe377f/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/50c31523fc98ee9ad8c89a5c7becdab72c1f9082ba9a3f83908edbab96bb113f/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/fb79309b447269cf4f3e9e4237d8d0e9a4d1cef082fcd6509129c04db7b55998/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/6d85c40ca368b51328ba3809cb38b2886d9094fdc1de4fdc4bdd4919f65b26cc/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/ebbf897fe843d00b20b2dbbcb7a74d04c54323c4fa1bd5bd56aa7863af5ddbc8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/c3f1ba8e36111adb5cd1969876084945ed98684c65a962de8d588e8624234162/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/fe72b238e05c936faa97c18a1b91c6e64a09f038824e8204dd84a0e50df6d40d/merged major:0 minor:189 fsType:overlay blockSize:0} 
overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/04aa083832ffea3f422199e4f39ab995168e07582b05397cbb17e49eebdf72de/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/a36798aa3d9273b91ed5bdcfb87cb3788c7eeca462850f6f337519bf42a2dcd1/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/3126eecda6f4dc7ebf1a8847c8757ec57a1308595f9eb6b0a25fc58f38bc8e5a/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/205ed43c60b29132ab40eca10004c10df86adf4ccadb97dd10b36a4e85cf4b14/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/f8139dad7fa7db917cb97d2d68a508de283a0678544e8bf9401c85f280b344a0/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/facb76d782da8aee3773409c7ba9a73ab8130527113c0ca4637488124f3812f6/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/3c79bbcc273d41b4ad58ddf940c7c68c70f557d8b03b9aa7a31a08d0558b3a00/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/9677e1ab02b447f591512896b96b77274c1b9d26814ca78e5d21b688e34a4224/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/a46b9d763c1dea9e124c3550d7ee2ecfbfa08d99e3158a28bd2f071c543e29e4/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/3a9b659362ea67fa7db9e03817e6a3d422cfe2cf0ef1cc769847d092dd4f7f05/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/222655e55e073248b97fc43dc315d3eee43c67bcc46f0b6d2feccf0d28f2b4db/merged major:0 minor:290 fsType:overlay blockSize:0} 
overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/3b644279042c82404607fce4b8123ea39016085cbb3f6e49d50fddcda1bec701/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/7263ac980487d0ecc386858b3ef4eabacf2d6412c025bd422d6c3a5877e074da/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/937e72a6b6dea06e7ed4c28b77c68430df83bb5f59c4a904da2ddfccdd940f5d/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/df2db2e8cb8f3e5c26bd91bf47520cc2d2b15fc360a0b5385f39d3aa1647799e/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/a71b5a4239963acbd392f677f644a24aa46fa062a4ec107ad0b85cdf9efe4766/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/12405b8d102630076cff1acc8970de4d913b6bfe437b7467c8144fa58ef5248e/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/fd5e8c788a9acb67c57b7d6041772ad7e8b6893649a7a2ca89b2f8a2e25d63ec/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/da2e716be0df74045cb8ef1be4a1660135fbb2a862c829733288ec6bca7bbe7a/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/8e656fe24b7bbe7dd7319655f953f460148ae03deba86de826ce526ea8eb8026/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/b7dafc1abf2fdea72250629562e5b5bdb26135686b08e70bd8cf9f96515527fc/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/c34935decba72a8ef385126d41f7052331d157559c3190643e777710b6ec6bea/merged major:0 minor:60 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/a7fa9ca7bf89ee8d3bc926bcd9b57a4ba9d8a9cfde8ebb9dcc37b2ad2a5d42e9/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/72e1ae206568369c1a31c1cdd6d4c25b83fd7ba88f55ccc47f7c771ca8f45f6d/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/87faa253f073012cf5a37a4daa32e1ebfaa5847f2aebe4f9847c421c3dfa5173/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/a43d7f84e71ecf6a6a10c2a141735899e96d60c72e538f7141f3106bbc65c573/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/c20e940e6f5c73c63d023132d0026dc302f413e238e2270b06f384ec2e56fffe/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/be737f056795cd0b26d6adf9308b662675732f6267743db2aeaf92d328e2856d/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/f89ca4f5df93f84df9cecdb836c2c0ff98ed98f665e843297de4c59bac7d1baa/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 09 16:25:43.054447 master-0 kubenswrapper[7604]: I0309 16:25:43.053939 7604 manager.go:217] Machine: {Timestamp:2026-03-09 16:25:43.053103962 +0000 UTC m=+0.107073405 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:f32a84ce369a40d4b790587e3ee415c9 SystemUUID:f32a84ce-369a-40d4-b790-587e3ee415c9 BootID:14726782-964f-4d13-8ec1-f1921737ccdf Filesystems:[{Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~projected/kube-api-access-4zxck DeviceMajor:0 DeviceMinor:249 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bd3c489-427c-4a47-b7b9-5d1611b9be12/volumes/kubernetes.io~projected/kube-api-access-gc9jl DeviceMajor:0 DeviceMinor:242 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/a62ba179-443d-424f-8cff-c75677e8cd5c/volumes/kubernetes.io~projected/kube-api-access-z242f DeviceMajor:0 DeviceMinor:244 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0/userdata/shm DeviceMajor:0 DeviceMinor:214 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:247 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~projected/kube-api-access-xkjv9 DeviceMajor:0 DeviceMinor:213 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~projected/kube-api-access-hkrlr DeviceMajor:0 DeviceMinor:241 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:220 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108168 
HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-145 DeviceMajor:0 DeviceMinor:145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:237 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70/userdata/shm DeviceMajor:0 DeviceMinor:102 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~projected/kube-api-access-782hr DeviceMajor:0 DeviceMinor:246 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~projected/kube-api-access-vqcqb DeviceMajor:0 DeviceMinor:239 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~projected/kube-api-access-nl7dv DeviceMajor:0 DeviceMinor:236 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~projected/kube-api-access-bdmsj DeviceMajor:0 DeviceMinor:229 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:136 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~projected/kube-api-access-sst4g DeviceMajor:0 DeviceMinor:240 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ba020e0-1728-4e56-9618-d0ec3d9126eb/volumes/kubernetes.io~projected/kube-api-access-tnw68 DeviceMajor:0 DeviceMinor:112 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~projected/kube-api-access-55zwh DeviceMajor:0 DeviceMinor:252 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~projected/kube-api-access-kvh62 DeviceMajor:0 DeviceMinor:126 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~projected/kube-api-access-4p2nd DeviceMajor:0 DeviceMinor:233 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~projected/kube-api-access-j244n DeviceMajor:0 DeviceMinor:248 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~projected/kube-api-access-497s5 DeviceMajor:0 DeviceMinor:235 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:127 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/df2ec8b2-02d7-40c4-ac20-32615d689697/volumes/kubernetes.io~projected/kube-api-access-rfj7p DeviceMajor:0 DeviceMinor:98 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/709aad35-08ca-4ff5-abe5-e1558c8dc83f/volumes/kubernetes.io~projected/kube-api-access-579rp DeviceMajor:0 DeviceMinor:268 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~projected/kube-api-access-rrt7m DeviceMajor:0 DeviceMinor:99 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:232 
Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~projected/kube-api-access-98j7c DeviceMajor:0 DeviceMinor:230 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~projected/kube-api-access-p9dfn DeviceMajor:0 DeviceMinor:123 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~projected/kube-api-access-98llp DeviceMajor:0 DeviceMinor:137 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~projected/kube-api-access-whqvw DeviceMajor:0 DeviceMinor:125 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~projected/kube-api-access-pr46z DeviceMajor:0 DeviceMinor:234 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~projected/kube-api-access-ctsqs DeviceMajor:0 DeviceMinor:243 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/kube-api-access-psgk6 DeviceMajor:0 DeviceMinor:228 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/kube-api-access-5trxh DeviceMajor:0 DeviceMinor:227 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~projected/kube-api-access-fv95c DeviceMajor:0 DeviceMinor:238 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:245 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77a20946-c236-417e-8333-6d1aac88bbc2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:94 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec/userdata/shm DeviceMajor:0 DeviceMinor:138 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 
Type:vfs Inodes:4108168 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:360673ea108cd41 MacAddress:72:0f:63:e1:ff:b2 Speed:10000 Mtu:8900} {Name:39187f3f3774db7 MacAddress:1a:18:3d:3a:34:3c Speed:10000 Mtu:8900} {Name:553046f43046d3f MacAddress:6a:ca:b2:d9:93:d7 Speed:10000 Mtu:8900} {Name:5b2e2b8431e578f MacAddress:a2:b8:2d:12:99:c3 Speed:10000 Mtu:8900} {Name:788337cf1e09325 MacAddress:9a:4d:24:72:90:1d Speed:10000 Mtu:8900} {Name:79d594aa0207008 MacAddress:96:b7:0c:7c:a7:18 Speed:10000 Mtu:8900} {Name:84104ab7e1b72f8 MacAddress:72:aa:d6:f9:08:d8 Speed:10000 Mtu:8900} {Name:8663cef33748a7b MacAddress:a6:75:06:bc:6d:e9 Speed:10000 Mtu:8900} {Name:9a4035c483ccb66 MacAddress:5a:5b:ed:e9:68:5c Speed:10000 Mtu:8900} {Name:ae9bffea87b1c17 MacAddress:36:9c:e0:5d:07:4e Speed:10000 Mtu:8900} {Name:b676c70029ef585 MacAddress:36:3b:3d:b3:a0:87 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:1a:b9:9f:27:c4:b9 Speed:0 Mtu:8900} {Name:cf28b7d0809ac17 MacAddress:6e:17:f6:66:77:89 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:6a:59:6a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:5c:d5:0d Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:32:fa:9a:e0:19:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 09 16:25:43.054447 master-0 kubenswrapper[7604]: I0309 16:25:43.054441 7604 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.054570 7604 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.054809 7604 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.054963 7604 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.054990 7604 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.055212 7604 topology_manager.go:138] "Creating topology manager with none policy" Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.055221 7604 container_manager_linux.go:303] "Creating device plugin manager" Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.055229 7604 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 09 16:25:43.055249 master-0 kubenswrapper[7604]: I0309 16:25:43.055249 7604 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 09 16:25:43.055604 master-0 kubenswrapper[7604]: I0309 16:25:43.055450 7604 state_mem.go:36] "Initialized new in-memory state store" Mar 09 16:25:43.055604 master-0 kubenswrapper[7604]: I0309 16:25:43.055562 7604 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 09 16:25:43.055698 master-0 kubenswrapper[7604]: I0309 16:25:43.055684 7604 kubelet.go:418] "Attempting to sync node with API server" Mar 09 16:25:43.055731 master-0 kubenswrapper[7604]: I0309 16:25:43.055704 7604 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 09 16:25:43.055731 master-0 kubenswrapper[7604]: I0309 16:25:43.055720 7604 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 09 16:25:43.055787 master-0 kubenswrapper[7604]: I0309 16:25:43.055730 7604 kubelet.go:324] "Adding apiserver pod source" Mar 09 16:25:43.055787 master-0 kubenswrapper[7604]: I0309 16:25:43.055768 7604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 09 16:25:43.062076 master-0 kubenswrapper[7604]: I0309 16:25:43.062032 7604 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 09 16:25:43.063985 master-0 kubenswrapper[7604]: I0309 16:25:43.063849 7604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 09 16:25:43.064309 master-0 kubenswrapper[7604]: I0309 16:25:43.064273 7604 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 09 16:25:43.064486 master-0 kubenswrapper[7604]: I0309 16:25:43.064454 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 09 16:25:43.064486 master-0 kubenswrapper[7604]: I0309 16:25:43.064476 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 09 16:25:43.064486 master-0 kubenswrapper[7604]: I0309 16:25:43.064485 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064499 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064509 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064518 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064528 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064536 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064548 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064558 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 09 16:25:43.064593 master-0 kubenswrapper[7604]: I0309 16:25:43.064598 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 09 16:25:43.065180 master-0 kubenswrapper[7604]: I0309 16:25:43.065155 7604 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Mar 09 16:25:43.065226 master-0 kubenswrapper[7604]: I0309 16:25:43.065198 7604 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 09 16:25:43.066022 master-0 kubenswrapper[7604]: I0309 16:25:43.065958 7604 server.go:1280] "Started kubelet" Mar 09 16:25:43.066595 master-0 kubenswrapper[7604]: I0309 16:25:43.066545 7604 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 09 16:25:43.066721 master-0 kubenswrapper[7604]: I0309 16:25:43.066620 7604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 09 16:25:43.067707 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 09 16:25:43.070000 master-0 kubenswrapper[7604]: I0309 16:25:43.066732 7604 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 09 16:25:43.071489 master-0 kubenswrapper[7604]: I0309 16:25:43.071467 7604 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 09 16:25:43.074039 master-0 kubenswrapper[7604]: I0309 16:25:43.073985 7604 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 09 16:25:43.077886 master-0 kubenswrapper[7604]: I0309 16:25:43.077293 7604 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 09 16:25:43.078413 master-0 kubenswrapper[7604]: I0309 16:25:43.078374 7604 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 09 16:25:43.078413 master-0 kubenswrapper[7604]: I0309 16:25:43.078409 7604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 09 16:25:43.078562 master-0 kubenswrapper[7604]: I0309 16:25:43.078492 7604 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 10:18:35.443970169 +0000 UTC Mar 09 16:25:43.078562 master-0 
kubenswrapper[7604]: I0309 16:25:43.078548 7604 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h52m52.365425093s for next certificate rotation Mar 09 16:25:43.079074 master-0 kubenswrapper[7604]: I0309 16:25:43.078618 7604 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 09 16:25:43.079074 master-0 kubenswrapper[7604]: I0309 16:25:43.078644 7604 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 09 16:25:43.079074 master-0 kubenswrapper[7604]: I0309 16:25:43.078676 7604 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 09 16:25:43.079495 master-0 kubenswrapper[7604]: I0309 16:25:43.079373 7604 server.go:449] "Adding debug handlers to kubelet server" Mar 09 16:25:43.080103 master-0 kubenswrapper[7604]: I0309 16:25:43.079807 7604 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 09 16:25:43.080103 master-0 kubenswrapper[7604]: I0309 16:25:43.079839 7604 factory.go:55] Registering systemd factory Mar 09 16:25:43.080103 master-0 kubenswrapper[7604]: I0309 16:25:43.079856 7604 factory.go:221] Registration of the systemd container factory successfully Mar 09 16:25:43.080934 master-0 kubenswrapper[7604]: I0309 16:25:43.080109 7604 factory.go:153] Registering CRI-O factory Mar 09 16:25:43.080934 master-0 kubenswrapper[7604]: I0309 16:25:43.080291 7604 factory.go:221] Registration of the crio container factory successfully Mar 09 16:25:43.081194 master-0 kubenswrapper[7604]: I0309 16:25:43.081175 7604 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 09 16:25:43.081239 master-0 kubenswrapper[7604]: I0309 16:25:43.081204 7604 factory.go:103] Registering Raw factory Mar 09 16:25:43.081239 master-0 kubenswrapper[7604]: I0309 16:25:43.081217 
7604 manager.go:1196] Started watching for new ooms in manager Mar 09 16:25:43.081953 master-0 kubenswrapper[7604]: I0309 16:25:43.081894 7604 manager.go:319] Starting recovery of all containers Mar 09 16:25:43.084264 master-0 kubenswrapper[7604]: I0309 16:25:43.084189 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d15da434-241d-4a93-9ce3-f943d43bf2ce" volumeName="kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb" seLinuxMountContext="" Mar 09 16:25:43.084264 master-0 kubenswrapper[7604]: I0309 16:25:43.084254 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy" seLinuxMountContext="" Mar 09 16:25:43.084264 master-0 kubenswrapper[7604]: I0309 16:25:43.084265 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084274 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084284 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084293 7604 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="457f42a7-f14c-4d61-a87a-bc1ed422feed" volumeName="kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084302 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084311 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084321 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084330 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084339 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084347 7604 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084354 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" volumeName="kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084365 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6912539-9b06-4e2c-b6a8-155df31147f2" volumeName="kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084372 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc732d23-37bc-41c2-9f9b-333ba517c1f8" volumeName="kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g" seLinuxMountContext="" Mar 09 16:25:43.084392 master-0 kubenswrapper[7604]: I0309 16:25:43.084390 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084432 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e97466a-7c33-4efb-a961-14024d913a21" volumeName="kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084442 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084450 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084457 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084469 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a62ba179-443d-424f-8cff-c75677e8cd5c" volumeName="kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084477 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084485 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084493 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" volumeName="kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084502 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" volumeName="kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084510 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a612208-f777-486f-9dde-048b2d898c7f" volumeName="kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084550 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="457f42a7-f14c-4d61-a87a-bc1ed422feed" volumeName="kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084561 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc732d23-37bc-41c2-9f9b-333ba517c1f8" volumeName="kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084598 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084609 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1e97466a-7c33-4efb-a961-14024d913a21" volumeName="kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084618 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a612208-f777-486f-9dde-048b2d898c7f" volumeName="kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084628 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" volumeName="kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084639 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="004d1e93-2345-4e62-902c-33f9dbb0f397" volumeName="kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084647 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b9030c9-7f5f-4e54-ae93-140469e3558b" volumeName="kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084658 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b9030c9-7f5f-4e54-ae93-140469e3558b" volumeName="kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084670 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084683 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" volumeName="kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084691 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084699 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084709 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084718 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084728 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="72739f4d-da25-493b-91ef-d2b64e9297dd" volumeName="kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084737 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be86c85d-59b1-4279-8253-a998ca16cd4d" volumeName="kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084746 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" volumeName="kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084755 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert" seLinuxMountContext="" Mar 09 16:25:43.084739 master-0 kubenswrapper[7604]: I0309 16:25:43.084764 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca" seLinuxMountContext="" Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084773 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap" seLinuxMountContext="" Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084783 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1e97466a-7c33-4efb-a961-14024d913a21" volumeName="kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084793 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5565c060-5952-4e85-8873-18bb80663924" volumeName="kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084800 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5565c060-5952-4e85-8873-18bb80663924" volumeName="kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084810 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77a20946-c236-417e-8333-6d1aac88bbc2" volumeName="kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084818 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" volumeName="kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084830 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084839 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084848 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="457f42a7-f14c-4d61-a87a-bc1ed422feed" volumeName="kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084857 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084867 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="709aad35-08ca-4ff5-abe5-e1558c8dc83f" volumeName="kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084875 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6912539-9b06-4e2c-b6a8-155df31147f2" volumeName="kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084884 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084893 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="004d1e93-2345-4e62-902c-33f9dbb0f397" volumeName="kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084904 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a4491c-12cc-4531-ad3e-246e93ed7842" volumeName="kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084913 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a612208-f777-486f-9dde-048b2d898c7f" volumeName="kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084921 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" volumeName="kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084929 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084937 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77a20946-c236-417e-8333-6d1aac88bbc2" volumeName="kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084945 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084954 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084961 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2ec8b2-02d7-40c4-ac20-32615d689697" volumeName="kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084969 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef122f26-bfae-44d2-a70a-8507b3b47332" volumeName="kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084977 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084985 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="709aad35-08ca-4ff5-abe5-e1558c8dc83f" volumeName="kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.084993 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" volumeName="kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085002 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085011 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a4491c-12cc-4531-ad3e-246e93ed7842" volumeName="kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085021 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a4491c-12cc-4531-ad3e-246e93ed7842" volumeName="kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085031 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" volumeName="kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085042 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085050 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" volumeName="kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085058 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6912539-9b06-4e2c-b6a8-155df31147f2" volumeName="kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085067 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2ec8b2-02d7-40c4-ac20-32615d689697" volumeName="kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085075 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085084 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bd3c489-427c-4a47-b7b9-5d1611b9be12" volumeName="kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085091 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085101 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" volumeName="kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085110 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2ec8b2-02d7-40c4-ac20-32615d689697" volumeName="kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085118 7604 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f965b971-7e9a-4513-8450-b2b527609bd6" volumeName="kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c" seLinuxMountContext=""
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085126 7604 reconstruct.go:97] "Volume reconstruction finished"
Mar 09 16:25:43.085612 master-0 kubenswrapper[7604]: I0309 16:25:43.085133 7604 reconciler.go:26] "Reconciler: start to sync state"
Mar 09 16:25:43.090502 master-0 kubenswrapper[7604]: I0309 16:25:43.089062 7604 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 09 16:25:43.108236 master-0 kubenswrapper[7604]: I0309 16:25:43.108055 7604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 09 16:25:43.109676 master-0 kubenswrapper[7604]: I0309 16:25:43.109635 7604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 09 16:25:43.109792 master-0 kubenswrapper[7604]: I0309 16:25:43.109681 7604 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 09 16:25:43.109792 master-0 kubenswrapper[7604]: I0309 16:25:43.109715 7604 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 09 16:25:43.109792 master-0 kubenswrapper[7604]: E0309 16:25:43.109762 7604 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 09 16:25:43.112270 master-0 kubenswrapper[7604]: I0309 16:25:43.112208 7604 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 09 16:25:43.132667 master-0 kubenswrapper[7604]: I0309 16:25:43.132619 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 09 16:25:43.133037 master-0 kubenswrapper[7604]: I0309 16:25:43.132991 7604 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482" exitCode=1
Mar 09 16:25:43.133037 master-0 kubenswrapper[7604]: I0309 16:25:43.133023 7604 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="3e490c33eb237e7928c7acb6c95a66dd05db37a72075c92f51066f56e730d5ab" exitCode=0
Mar 09 16:25:43.134258 master-0 kubenswrapper[7604]: I0309 16:25:43.134226 7604 generic.go:334] "Generic (PLEG): container finished" podID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerID="5f6392f9e974864cb8a576a8cc4e692a56b1538084351cbc64c608b35b4670f8" exitCode=0
Mar 09 16:25:43.136623 master-0 kubenswrapper[7604]: I0309 16:25:43.136576 7604 generic.go:334] "Generic (PLEG): container finished" podID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerID="8f1a1e060987b820e153c9d0c33ec719e219b362f2873a0c12439e503198da64" exitCode=0
Mar 09 16:25:43.142585 master-0 kubenswrapper[7604]: I0309 16:25:43.142548 7604 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="cd074429ed45f5a8693a7e2dec95a69a0356de57104bf51c86da0531be3d00f3" exitCode=0
Mar 09 16:25:43.146807 master-0 kubenswrapper[7604]: I0309 16:25:43.146762 7604 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22" exitCode=0
Mar 09 16:25:43.154483 master-0 kubenswrapper[7604]: I0309 16:25:43.154410 7604 generic.go:334] "Generic (PLEG): container finished" podID="6d47955b-b85c-4137-9dea-ff0c20d5ab77" containerID="c0b6c146623a62ab0a5823c85168f8b6cd4a93ec0368a37111e0616c32e8f226" exitCode=0
Mar 09 16:25:43.164776 master-0 kubenswrapper[7604]: I0309 16:25:43.164713 7604 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="f92b3bf64fc4165da416ac63f159e2be71d6527248ee0c28520165449adf5e4e" exitCode=0
Mar 09 16:25:43.164776 master-0 kubenswrapper[7604]: I0309 16:25:43.164759 7604 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="db91761f4ed69865df84925e7d692b45a5d00ca5d8cda47d3e02e2821fc11818" exitCode=0
Mar 09 16:25:43.164776 master-0 kubenswrapper[7604]: I0309 16:25:43.164772 7604 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="0e4dffbedd2651da68c4f09131df95460c21cf12adecaf4ed6c71f35a722b888" exitCode=0
Mar 09 16:25:43.164776 master-0 kubenswrapper[7604]: I0309 16:25:43.164784 7604 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="1c8c260da059200c19ff4508a0a4e27c1306ddf0f97c62b30fb7ed75be818372" exitCode=0
Mar 09 16:25:43.165084 master-0 kubenswrapper[7604]: I0309 16:25:43.164794 7604 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="e85c846d70b2880d50adc9dc310cb9743473eb6e96f2c0617b7d1adfb1817ac6" exitCode=0
Mar 09 16:25:43.165084 master-0 kubenswrapper[7604]: I0309 16:25:43.164803 7604 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="e18e252fd560cea1fe0cd7cc5f8a186dd08bab19f2d2e38f70e4a77bd4ec31c0" exitCode=0
Mar 09 16:25:43.209892 master-0 kubenswrapper[7604]: E0309 16:25:43.209814 7604 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 09 16:25:43.213191 master-0 kubenswrapper[7604]: I0309 16:25:43.213155 7604 manager.go:324] Recovery completed
Mar 09 16:25:43.262252 master-0 kubenswrapper[7604]: I0309 16:25:43.262143 7604 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 09 16:25:43.262252 master-0 kubenswrapper[7604]: I0309 16:25:43.262175 7604 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 09 16:25:43.262252 master-0 kubenswrapper[7604]: I0309 16:25:43.262216 7604 state_mem.go:36] "Initialized new in-memory state store"
Mar 09 16:25:43.262617 master-0 kubenswrapper[7604]: I0309 16:25:43.262582 7604 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 09 16:25:43.262657 master-0 kubenswrapper[7604]: I0309 16:25:43.262599 7604 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 09 16:25:43.262657 master-0 kubenswrapper[7604]: I0309 16:25:43.262628 7604 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 09 16:25:43.262657 master-0 kubenswrapper[7604]: I0309 16:25:43.262637 7604 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 09 16:25:43.262657 master-0 kubenswrapper[7604]: I0309 16:25:43.262645 7604 policy_none.go:49] "None policy: Start"
Mar 09 16:25:43.264447 master-0 kubenswrapper[7604]: I0309 16:25:43.264376 7604 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 09 16:25:43.264447 master-0 kubenswrapper[7604]: I0309 16:25:43.264441 7604 state_mem.go:35] "Initializing new in-memory state store"
Mar 09 16:25:43.264727 master-0 kubenswrapper[7604]: I0309 16:25:43.264699 7604 state_mem.go:75] "Updated machine memory state"
Mar 09 16:25:43.264727 master-0 kubenswrapper[7604]: I0309 16:25:43.264719 7604 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.273808 7604 manager.go:334] "Starting Device Plugin manager"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.274078 7604 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.274489 7604 server.go:79] "Starting device plugin registration server"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.275011 7604 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.275026 7604 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.275271 7604 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.275732 7604 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 09 16:25:43.276287 master-0 kubenswrapper[7604]: I0309 16:25:43.275744 7604 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 09 16:25:43.375383 master-0 kubenswrapper[7604]: I0309 16:25:43.375174 7604 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:25:43.377457 master-0 kubenswrapper[7604]: I0309 16:25:43.377372 7604 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:25:43.377520 master-0 kubenswrapper[7604]: I0309 16:25:43.377511 7604 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:25:43.377552 master-0 kubenswrapper[7604]: I0309 16:25:43.377525 7604 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:25:43.377641 master-0 kubenswrapper[7604]: I0309 16:25:43.377607 7604 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:25:43.392298 master-0 kubenswrapper[7604]: I0309 16:25:43.392230 7604 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 09 16:25:43.392544 master-0 kubenswrapper[7604]: I0309 16:25:43.392352 7604 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 09 16:25:43.410197 master-0 kubenswrapper[7604]: I0309 16:25:43.409955 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 09 16:25:43.410965 master-0 kubenswrapper[7604]: I0309 16:25:43.410833 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093"}
Mar 09 16:25:43.410965 master-0 kubenswrapper[7604]: I0309 16:25:43.410957 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.410973 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"618310ea694b058d1ad79e1ef0d9913735988a6ed96bb326b74d3f8179a42988"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.410993 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"57f6bbbfcfb537c0879739b1547de923304fd0f8bd8f06701d29220990585d09"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411013 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411028 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"3e490c33eb237e7928c7acb6c95a66dd05db37a72075c92f51066f56e730d5ab"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411040 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411059 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d930c817ef4ae01952139d2036fa03601b517d00c581c9478ada1f7319d378e"
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411078 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf7607bf63c826880c277db5efe1d7b1c54664d8a874cf3cbfd77d87cef3162"
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411091 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"}
Mar 09 16:25:43.411092 master-0 kubenswrapper[7604]: I0309 16:25:43.411103 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"d44b4281b666f32d8647c6a143f074eebe2a44e65e8dee2574808efbf233ffa9"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411114 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"07db56df5935bcf14ea5515353e90f66ca2dfb6085cf1ee6d120e5df2888a136"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411131 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411145 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411154 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411170 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411202 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"e8fcbf086ed08a14966a423a93930e67c1cbd9793017fcea8581f23478898eea"}
Mar 09 16:25:43.411329 master-0 kubenswrapper[7604]: I0309 16:25:43.411214 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d"}
Mar 09 16:25:43.425937 master-0 kubenswrapper[7604]: W0309 16:25:43.425881 7604 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 09 16:25:43.426022 master-0 kubenswrapper[7604]: E0309 16:25:43.425976 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:25:43.426980 master-0 kubenswrapper[7604]: E0309 16:25:43.426950 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.427048 master-0 kubenswrapper[7604]: E0309 16:25:43.426981 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.427048 master-0 kubenswrapper[7604]: E0309 16:25:43.426962 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:25:43.427111 master-0 kubenswrapper[7604]: E0309 16:25:43.427032 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:25:43.490603 master-0 kubenswrapper[7604]: I0309 16:25:43.490545 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:25:43.490603 master-0 kubenswrapper[7604]: I0309 16:25:43.490589 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.490603 master-0 kubenswrapper[7604]: I0309 16:25:43.490610 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.490603 master-0 kubenswrapper[7604]: I0309 16:25:43.490626 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490642 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490657 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490674 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490693 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490715 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490732 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490748 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490764 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490779 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490792 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490807 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490821 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.490983 master-0 kubenswrapper[7604]: I0309 16:25:43.490837 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:25:43.591455 master-0 kubenswrapper[7604]: I0309 16:25:43.591379 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.591455 master-0 kubenswrapper[7604]: I0309 16:25:43.591441 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:25:43.591455 master-0 kubenswrapper[7604]: I0309 16:25:43.591471 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:25:43.591717 master-0 kubenswrapper[7604]: I0309 16:25:43.591496 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 09 16:25:43.591717 master-0 kubenswrapper[7604]: I0309 16:25:43.591524 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:43.591717 master-0 kubenswrapper[7604]: I0309 16:25:43.591660 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.591791 master-0 kubenswrapper[7604]: I0309 16:25:43.591708 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:43.591894 master-0 kubenswrapper[7604]: I0309 16:25:43.591824 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName:
\"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.591931 master-0 kubenswrapper[7604]: I0309 16:25:43.591907 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:25:43.591972 master-0 kubenswrapper[7604]: I0309 16:25:43.591954 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:25:43.592009 master-0 kubenswrapper[7604]: I0309 16:25:43.591975 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:25:43.592043 master-0 kubenswrapper[7604]: I0309 16:25:43.592018 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592072 master-0 kubenswrapper[7604]: I0309 16:25:43.592040 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592072 master-0 kubenswrapper[7604]: I0309 16:25:43.592043 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592072 master-0 kubenswrapper[7604]: I0309 16:25:43.592062 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592074 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592079 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592105 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592104 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592123 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592125 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592141 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592125 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592151 master-0 kubenswrapper[7604]: I0309 16:25:43.592154 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592154 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592189 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592171 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592234 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592210 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592248 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592270 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592347 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592413 master-0 kubenswrapper[7604]: I0309 16:25:43.592432 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" 
(UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:43.592678 master-0 kubenswrapper[7604]: I0309 16:25:43.592465 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:44.001711 master-0 kubenswrapper[7604]: I0309 16:25:44.001653 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:44.007253 master-0 kubenswrapper[7604]: I0309 16:25:44.007206 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:44.057185 master-0 kubenswrapper[7604]: I0309 16:25:44.057111 7604 apiserver.go:52] "Watching apiserver" Mar 09 16:25:44.066752 master-0 kubenswrapper[7604]: I0309 16:25:44.066705 7604 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 09 16:25:44.067967 master-0 kubenswrapper[7604]: I0309 16:25:44.067919 7604 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78","openshift-dns-operator/dns-operator-589895fbb7-6sknh","openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9","openshift-multus/multus-additional-cni-plugins-jkhls","openshift-multus/multus-admission-controller-8d675b596-g8n5t","assisted-installer/assisted-installer-controller-rdwtz","openshift-ovn-kubernetes/ovnkube-node-vwgwh","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd","kube-system/bootstrap-kube-scheduler-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n","openshift-network-operator/iptables-alerter-g4tdb","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv","openshift-ingress-operator/ingress-operator-677db989d6-xtmhw","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75","openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj","kube-system/bootstrap-kube-controller-manager-master-0","openshift-multus/network-metrics-daemon-n7slb","openshift-network-operator/network-operator-7c649bf6d4-r82z7","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk","openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9","openshift-etcd/etcd-master-0-master-0
","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c","openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4","openshift-multus/multus-gfqq8","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc","openshift-network-node-identity/network-node-identity-nqwd2","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl","openshift-network-diagnostics/network-check-target-ncskk"]
Mar 09 16:25:44.068213 master-0 kubenswrapper[7604]: I0309 16:25:44.068190 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rdwtz"
Mar 09 16:25:44.068540 master-0 kubenswrapper[7604]: I0309 16:25:44.068490 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:44.071917 master-0 kubenswrapper[7604]: I0309 16:25:44.071868 7604 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:44.073575 master-0 kubenswrapper[7604]: I0309 16:25:44.073534 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 09 16:25:44.073760 master-0 kubenswrapper[7604]: I0309 16:25:44.073726 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 09 16:25:44.074152 master-0 kubenswrapper[7604]: I0309 16:25:44.074112 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 09 16:25:44.074515 master-0 kubenswrapper[7604]: I0309 16:25:44.074496 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 09 16:25:44.074667 master-0 kubenswrapper[7604]: I0309 16:25:44.074626 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.074777 master-0 kubenswrapper[7604]: I0309 16:25:44.074757 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.075685 master-0 kubenswrapper[7604]: I0309 16:25:44.075625 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:44.083347 master-0 kubenswrapper[7604]: I0309 16:25:44.083274 7604 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:25:44.083596 master-0 kubenswrapper[7604]: I0309 16:25:44.083566 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 09 16:25:44.083780 master-0 kubenswrapper[7604]: I0309 16:25:44.083743 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 09 16:25:44.083851 master-0 kubenswrapper[7604]: I0309 16:25:44.083831 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 09 16:25:44.083922 master-0 kubenswrapper[7604]: I0309 16:25:44.083761 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 09 16:25:44.084315 master-0 kubenswrapper[7604]: I0309 16:25:44.084297 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.084459 master-0 kubenswrapper[7604]: I0309 16:25:44.084411 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 09 16:25:44.084506 master-0 kubenswrapper[7604]: I0309 16:25:44.084460 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.084563 master-0 kubenswrapper[7604]: I0309 16:25:44.084460 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 09 16:25:44.084635 master-0 kubenswrapper[7604]: I0309 16:25:44.084544 7604 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 09 16:25:44.084682 master-0 kubenswrapper[7604]: I0309 16:25:44.084627 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:44.084867 master-0 kubenswrapper[7604]: I0309 16:25:44.084834 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:44.085155 master-0 kubenswrapper[7604]: I0309 16:25:44.085113 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.085251 master-0 kubenswrapper[7604]: I0309 16:25:44.085223 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:44.085919 master-0 kubenswrapper[7604]: I0309 16:25:44.085884 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 09 16:25:44.085919 master-0 kubenswrapper[7604]: I0309 16:25:44.085909 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 09 16:25:44.086036 master-0 kubenswrapper[7604]: I0309 16:25:44.086006 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 09 16:25:44.086036 master-0 kubenswrapper[7604]: I0309 16:25:44.086024 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 09 16:25:44.086102 master-0 kubenswrapper[7604]: I0309 16:25:44.086025 7604 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 09 16:25:44.086102 master-0 kubenswrapper[7604]: I0309 16:25:44.086068 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 09 16:25:44.086453 master-0 kubenswrapper[7604]: I0309 16:25:44.086397 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:44.087306 master-0 kubenswrapper[7604]: I0309 16:25:44.087274 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:44.087390 master-0 kubenswrapper[7604]: I0309 16:25:44.087354 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:44.088246 master-0 kubenswrapper[7604]: I0309 16:25:44.088204 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:44.088674 master-0 kubenswrapper[7604]: I0309 16:25:44.088636 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:44.088895 master-0 kubenswrapper[7604]: I0309 16:25:44.088863 7604 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:44.089009 master-0 kubenswrapper[7604]: I0309 16:25:44.088966 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.089648 master-0 kubenswrapper[7604]: I0309 16:25:44.089604 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 09 16:25:44.089712 master-0 kubenswrapper[7604]: I0309 16:25:44.089620 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 09 16:25:44.089936 master-0 kubenswrapper[7604]: I0309 16:25:44.089910 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 09 16:25:44.090052 master-0 kubenswrapper[7604]: I0309 16:25:44.090029 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 09 16:25:44.090370 master-0 kubenswrapper[7604]: I0309 16:25:44.090346 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 09 16:25:44.095332 master-0 kubenswrapper[7604]: I0309 16:25:44.095228 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7"
Mar 09 16:25:44.095332 master-0 kubenswrapper[7604]: I0309 16:25:44.095301 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\")
pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:44.095332 master-0 kubenswrapper[7604]: I0309 16:25:44.095329 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.095683 master-0 kubenswrapper[7604]: I0309 16:25:44.095571 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.095683 master-0 kubenswrapper[7604]: I0309 16:25:44.095646 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:44.095776 master-0 kubenswrapper[7604]: I0309 16:25:44.095681 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:25:44.095776 master-0 kubenswrapper[7604]: I0309 16:25:44.095712 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:44.095776 master-0 kubenswrapper[7604]: I0309 16:25:44.095735 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:44.095776 master-0 kubenswrapper[7604]: I0309 16:25:44.095767 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5trxh\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:44.095923 master-0 kubenswrapper[7604]: I0309 16:25:44.095795 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"
Mar 09 16:25:44.095923 master-0 kubenswrapper[7604]:
I0309 16:25:44.095822 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk"
Mar 09 16:25:44.095923 master-0 kubenswrapper[7604]: I0309 16:25:44.095848 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk"
Mar 09 16:25:44.095923 master-0 kubenswrapper[7604]: I0309 16:25:44.095877 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:44.095923 master-0 kubenswrapper[7604]: I0309 16:25:44.095903 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.095933 7604 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.095965 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.095989 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.096008 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.096020 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod 
\"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.096046 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:44.096105 master-0 kubenswrapper[7604]: I0309 16:25:44.096082 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:44.096367 master-0 kubenswrapper[7604]: I0309 16:25:44.096145 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.096367 master-0 kubenswrapper[7604]: I0309 16:25:44.096175 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:25:44.096367 master-0 kubenswrapper[7604]: 
I0309 16:25:44.096202 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.096367 master-0 kubenswrapper[7604]: I0309 16:25:44.096225 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:44.096367 master-0 kubenswrapper[7604]: I0309 16:25:44.096246 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:44.096565 master-0 kubenswrapper[7604]: I0309 16:25:44.096484 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:44.096565 master-0 kubenswrapper[7604]: I0309 16:25:44.096542 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod 
\"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:44.096649 master-0 kubenswrapper[7604]: I0309 16:25:44.096562 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.096699 master-0 kubenswrapper[7604]: I0309 16:25:44.096657 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:44.097661 master-0 kubenswrapper[7604]: I0309 16:25:44.097637 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:44.097823 master-0 kubenswrapper[7604]: I0309 16:25:44.097803 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.097888 master-0 kubenswrapper[7604]: I0309 
16:25:44.097876 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:44.098172 master-0 kubenswrapper[7604]: I0309 16:25:44.098151 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.098461 master-0 kubenswrapper[7604]: I0309 16:25:44.098442 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:44.098541 master-0 kubenswrapper[7604]: I0309 16:25:44.098514 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:44.098588 master-0 kubenswrapper[7604]: I0309 16:25:44.098561 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.098627 master-0 kubenswrapper[7604]: I0309 16:25:44.098598 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:44.098670 master-0 kubenswrapper[7604]: I0309 16:25:44.098624 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 09 16:25:44.098670 master-0 kubenswrapper[7604]: I0309 16:25:44.098629 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:44.098752 master-0 kubenswrapper[7604]: I0309 16:25:44.098681 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.098752 master-0 kubenswrapper[7604]: I0309 16:25:44.098715 7604 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.098752 master-0 kubenswrapper[7604]: I0309 16:25:44.098747 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.098864 master-0 kubenswrapper[7604]: I0309 16:25:44.098780 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:44.098864 master-0 kubenswrapper[7604]: I0309 16:25:44.098810 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-497s5\" (UniqueName: \"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:44.098864 master-0 kubenswrapper[7604]: I0309 16:25:44.098841 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " 
pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:44.098974 master-0 kubenswrapper[7604]: I0309 16:25:44.098876 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:44.098974 master-0 kubenswrapper[7604]: I0309 16:25:44.098908 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:44.098974 master-0 kubenswrapper[7604]: I0309 16:25:44.098940 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 09 16:25:44.099082 master-0 kubenswrapper[7604]: I0309 16:25:44.098940 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.099119 master-0 kubenswrapper[7604]: I0309 16:25:44.099092 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.099150 master-0 kubenswrapper[7604]: I0309 16:25:44.099127 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.099289 master-0 kubenswrapper[7604]: I0309 16:25:44.099268 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.099355 master-0 kubenswrapper[7604]: I0309 16:25:44.099286 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:44.099528 master-0 kubenswrapper[7604]: I0309 16:25:44.099512 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 09 16:25:44.099740 master-0 kubenswrapper[7604]: I0309 16:25:44.099712 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:44.100151 master-0 kubenswrapper[7604]: I0309 16:25:44.100131 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:44.100331 master-0 kubenswrapper[7604]: I0309 16:25:44.100310 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.100680 master-0 kubenswrapper[7604]: I0309 16:25:44.100643 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:44.100777 master-0 kubenswrapper[7604]: I0309 16:25:44.100730 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.104504 master-0 kubenswrapper[7604]: I0309 16:25:44.104455 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 09 
16:25:44.104760 master-0 kubenswrapper[7604]: I0309 16:25:44.104738 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 09 16:25:44.104821 master-0 kubenswrapper[7604]: I0309 16:25:44.104767 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 09 16:25:44.104945 master-0 kubenswrapper[7604]: I0309 16:25:44.104922 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 09 16:25:44.105058 master-0 kubenswrapper[7604]: I0309 16:25:44.105031 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 09 16:25:44.105297 master-0 kubenswrapper[7604]: I0309 16:25:44.105100 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 09 16:25:44.105479 master-0 kubenswrapper[7604]: I0309 16:25:44.105323 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 09 16:25:44.105670 master-0 kubenswrapper[7604]: I0309 16:25:44.105602 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 09 16:25:44.105801 master-0 kubenswrapper[7604]: I0309 16:25:44.105776 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 09 16:25:44.106666 master-0 kubenswrapper[7604]: I0309 16:25:44.106628 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:44.106877 master-0 kubenswrapper[7604]: I0309 16:25:44.106847 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.120311 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.122129 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.122369 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.127949 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.128720 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.129357 7604 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.129622 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.130047 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.130113 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.131079 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.131404 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.131730 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.132206 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.132788 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 09 16:25:44.135479 master-0 kubenswrapper[7604]: I0309 16:25:44.132837 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 09 16:25:44.154576 master-0 kubenswrapper[7604]: I0309 16:25:44.154473 7604 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.157233 master-0 kubenswrapper[7604]: I0309 16:25:44.157019 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 09 16:25:44.157233 master-0 kubenswrapper[7604]: I0309 16:25:44.157221 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 09 16:25:44.157517 master-0 kubenswrapper[7604]: I0309 16:25:44.157355 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.158391 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.158856 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159439 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159565 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159610 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159653 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159771 7604 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159834 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159858 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159940 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.159945 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.160090 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.160164 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.160197 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.160253 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 09 16:25:44.160323 master-0 kubenswrapper[7604]: I0309 16:25:44.160297 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.160408 7604 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.160482 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.160533 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.160586 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.160684 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.160921 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.161013 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.161091 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.161095 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 09 16:25:44.161153 master-0 kubenswrapper[7604]: I0309 16:25:44.161161 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161280 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161437 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161437 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161537 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161574 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161621 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161513 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.161707 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 09 16:25:44.161717 master-0 kubenswrapper[7604]: I0309 16:25:44.159563 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 09 16:25:44.162205 master-0 kubenswrapper[7604]: I0309 16:25:44.162080 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 09 16:25:44.162591 master-0 kubenswrapper[7604]: I0309 16:25:44.162535 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 09 16:25:44.162948 master-0 kubenswrapper[7604]: I0309 16:25:44.162923 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 09 16:25:44.163058 master-0 kubenswrapper[7604]: I0309 16:25:44.163037 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 09 16:25:44.164947 master-0 kubenswrapper[7604]: I0309 16:25:44.164914 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 09 16:25:44.165169 master-0 kubenswrapper[7604]: I0309 16:25:44.165129 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 09 16:25:44.171393 master-0 kubenswrapper[7604]: I0309 16:25:44.171334 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 09 16:25:44.173079 master-0 kubenswrapper[7604]: I0309 16:25:44.173020 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 09 16:25:44.174579 master-0 kubenswrapper[7604]: I0309 16:25:44.174550 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 09 16:25:44.176812 master-0 kubenswrapper[7604]: I0309 16:25:44.176666 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 09 16:25:44.177121 master-0 kubenswrapper[7604]: I0309 16:25:44.177059 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:44.179079 master-0 kubenswrapper[7604]: I0309 16:25:44.179048 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 09 16:25:44.179521 master-0 kubenswrapper[7604]: I0309 16:25:44.179464 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:44.182392 master-0 kubenswrapper[7604]: I0309 16:25:44.182370 7604 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 09 16:25:44.199842 master-0 kubenswrapper[7604]: I0309 16:25:44.199797 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:25:44.200255 master-0 kubenswrapper[7604]: I0309 16:25:44.200194 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:25:44.200442 master-0 kubenswrapper[7604]: I0309 16:25:44.199876 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 09 16:25:44.200442 master-0 kubenswrapper[7604]: I0309 16:25:44.200412 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:25:44.200550 master-0 kubenswrapper[7604]: I0309 16:25:44.200391 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:44.200550 master-0 kubenswrapper[7604]: I0309 16:25:44.200506 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:44.200631 master-0 kubenswrapper[7604]: I0309 16:25:44.200555 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.200631 master-0 kubenswrapper[7604]: I0309 16:25:44.200577 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:25:44.200631 master-0 kubenswrapper[7604]: I0309 16:25:44.200582 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:44.200631 master-0 kubenswrapper[7604]: I0309 16:25:44.200623 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:44.200775 master-0 kubenswrapper[7604]: I0309 16:25:44.200652 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.200775 master-0 kubenswrapper[7604]: I0309 16:25:44.200674 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.200775 master-0 kubenswrapper[7604]: I0309 16:25:44.200724 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.200775 master-0 kubenswrapper[7604]: I0309 16:25:44.200749 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.200775 master-0 kubenswrapper[7604]: I0309 16:25:44.200773 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:44.200958 master-0 kubenswrapper[7604]: I0309 16:25:44.200794 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.200958 master-0 kubenswrapper[7604]: I0309 16:25:44.200817 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.201341 master-0 kubenswrapper[7604]: I0309 16:25:44.201288 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:44.201411 master-0 kubenswrapper[7604]: I0309 16:25:44.201369 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.201471 master-0 kubenswrapper[7604]: I0309 16:25:44.201380 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:44.201471 master-0 kubenswrapper[7604]: I0309 16:25:44.201456 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.201602 master-0 kubenswrapper[7604]: I0309 16:25:44.201493 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zwh\" (UniqueName: \"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.201602 master-0 kubenswrapper[7604]: I0309 16:25:44.201384 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.201602 master-0 kubenswrapper[7604]: I0309 16:25:44.201516 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:44.201730 master-0 kubenswrapper[7604]: I0309 16:25:44.201609 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.201730 master-0 kubenswrapper[7604]: I0309 16:25:44.201641 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:44.201730 master-0 kubenswrapper[7604]: I0309 16:25:44.201678 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:25:44.201730 master-0 kubenswrapper[7604]: I0309 16:25:44.201707 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:44.201865 master-0 kubenswrapper[7604]: E0309 16:25:44.201802 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 09 16:25:44.201966 master-0 kubenswrapper[7604]: E0309 16:25:44.201902 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.701877639 +0000 UTC m=+1.755847062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found
Mar 09 16:25:44.201966 master-0 kubenswrapper[7604]: I0309 16:25:44.201947 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:44.202074 master-0 kubenswrapper[7604]: I0309 16:25:44.201973 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvh62\" (UniqueName: \"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:25:44.202074 master-0 kubenswrapper[7604]: I0309 16:25:44.202016 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:44.202074 master-0 kubenswrapper[7604]: I0309 16:25:44.202025 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:44.202074 master-0 kubenswrapper[7604]: I0309 16:25:44.202042 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.202074 master-0 kubenswrapper[7604]: I0309 16:25:44.202066 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.202277 master-0 kubenswrapper[7604]: I0309 16:25:44.202090 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.202277 master-0 kubenswrapper[7604]: I0309 16:25:44.202115 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.202277 master-0 kubenswrapper[7604]: I0309 16:25:44.202192 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.202277 master-0 kubenswrapper[7604]: I0309 16:25:44.202216 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.202277 master-0 kubenswrapper[7604]: I0309 16:25:44.202238 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.202277 master-0 kubenswrapper[7604]: I0309 16:25:44.202264 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202295 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202320 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202347 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202368 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202393 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202438 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:44.202533 master-0 kubenswrapper[7604]: I0309 16:25:44.202504 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202632 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202658 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202662 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202687 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202713 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202735 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202764 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.202799 master-0 kubenswrapper[7604]: I0309 16:25:44.202789 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:44.203089 master-0 kubenswrapper[7604]: I0309 16:25:44.202812 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.203089 master-0 kubenswrapper[7604]: I0309 16:25:44.202916 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:44.203089 master-0 kubenswrapper[7604]: E0309 16:25:44.202990 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:44.204508 master-0 kubenswrapper[7604]: I0309 16:25:44.204483 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.204570 master-0 kubenswrapper[7604]: E0309 16:25:44.204525 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.704508903 +0000 UTC m=+1.758478326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:44.204800 master-0 kubenswrapper[7604]: I0309 16:25:44.204776 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:44.204917 master-0 kubenswrapper[7604]: I0309 16:25:44.204901 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:44.205292 master-0 kubenswrapper[7604]: I0309 16:25:44.205072 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:25:44.205523 master-0 kubenswrapper[7604]: I0309 16:25:44.205415 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.205701 master-0 kubenswrapper[7604]: I0309 16:25:44.205663 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:25:44.206016 master-0 kubenswrapper[7604]: I0309 16:25:44.205711 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:44.206074 master-0 kubenswrapper[7604]: I0309 16:25:44.206034 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:25:44.206116 master-0 kubenswrapper[7604]: I0309 16:25:44.206073 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:25:44.206116 master-0 kubenswrapper[7604]: I0309 16:25:44.205671 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.206116 master-0 kubenswrapper[7604]: I0309 16:25:44.206111 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:44.206346 master-0 kubenswrapper[7604]: I0309 16:25:44.206299 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:44.206463 master-0 kubenswrapper[7604]: I0309 16:25:44.206398 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.206596 master-0 kubenswrapper[7604]: E0309 16:25:44.206569 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:44.206646 master-0 kubenswrapper[7604]: I0309 16:25:44.206625 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:44.206686 master-0 kubenswrapper[7604]: E0309 16:25:44.206672 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.706631323 +0000 UTC m=+1.760600746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found
Mar 09 16:25:44.206744 master-0 kubenswrapper[7604]: I0309 16:25:44.206705 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID: \"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"
Mar 09 16:25:44.206792 master-0 kubenswrapper[7604]: I0309 16:25:44.206749 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:25:44.206792 master-0 kubenswrapper[7604]: I0309 16:25:44.206752 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName:
\"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:25:44.207074 master-0 kubenswrapper[7604]: I0309 16:25:44.207029 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:25:44.207129 master-0 kubenswrapper[7604]: I0309 16:25:44.207105 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:44.207257 master-0 kubenswrapper[7604]: I0309 16:25:44.207234 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p2nd\" (UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:44.207327 master-0 kubenswrapper[7604]: I0309 16:25:44.207304 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:44.207442 master-0 kubenswrapper[7604]: I0309 16:25:44.207406 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:44.207525 master-0 kubenswrapper[7604]: I0309 16:25:44.207496 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:44.207989 master-0 kubenswrapper[7604]: I0309 16:25:44.207955 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:25:44.208321 master-0 kubenswrapper[7604]: I0309 16:25:44.208288 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:44.208412 master-0 kubenswrapper[7604]: I0309 16:25:44.208386 
7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:44.208473 master-0 kubenswrapper[7604]: I0309 16:25:44.208465 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.208520 master-0 kubenswrapper[7604]: I0309 16:25:44.208507 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.208557 master-0 kubenswrapper[7604]: I0309 16:25:44.208544 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.208726 master-0 kubenswrapper[7604]: I0309 16:25:44.208692 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 
16:25:44.208815 master-0 kubenswrapper[7604]: I0309 16:25:44.208780 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:44.208907 master-0 kubenswrapper[7604]: I0309 16:25:44.208876 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.208962 master-0 kubenswrapper[7604]: I0309 16:25:44.208920 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:44.209004 master-0 kubenswrapper[7604]: I0309 16:25:44.208991 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:25:44.209054 master-0 kubenswrapper[7604]: I0309 16:25:44.208911 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod 
\"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.209054 master-0 kubenswrapper[7604]: I0309 16:25:44.209025 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.214221 master-0 kubenswrapper[7604]: I0309 16:25:44.214107 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:25:44.214623 master-0 kubenswrapper[7604]: I0309 16:25:44.214539 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:44.214623 master-0 kubenswrapper[7604]: E0309 16:25:44.214571 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:44.214710 master-0 kubenswrapper[7604]: E0309 16:25:44.214671 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.714642088 +0000 UTC m=+1.768611511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:44.214758 master-0 kubenswrapper[7604]: I0309 16:25:44.214714 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:25:44.214802 master-0 kubenswrapper[7604]: I0309 16:25:44.214775 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.214910 master-0 kubenswrapper[7604]: I0309 16:25:44.214838 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.214910 master-0 kubenswrapper[7604]: E0309 16:25:44.214890 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:44.215018 master-0 kubenswrapper[7604]: E0309 16:25:44.214981 7604 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.714972247 +0000 UTC m=+1.768941670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:44.215053 master-0 kubenswrapper[7604]: I0309 16:25:44.215011 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:44.215088 master-0 kubenswrapper[7604]: E0309 16:25:44.215072 7604 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:44.215174 master-0 kubenswrapper[7604]: I0309 16:25:44.215144 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:44.215174 master-0 kubenswrapper[7604]: E0309 16:25:44.215159 7604 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.715137682 +0000 UTC m=+1.769107105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:44.215374 master-0 kubenswrapper[7604]: I0309 16:25:44.215320 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.215619 master-0 kubenswrapper[7604]: I0309 16:25:44.215516 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.215619 master-0 kubenswrapper[7604]: I0309 16:25:44.215598 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.215687 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsqs\" (UniqueName: 
\"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.215730 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.215754 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.215778 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.215869 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:25:44.216062 master-0 
kubenswrapper[7604]: I0309 16:25:44.215908 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.215953 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:25:44.216062 master-0 kubenswrapper[7604]: I0309 16:25:44.216034 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.216460 master-0 kubenswrapper[7604]: I0309 16:25:44.216083 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.216460 master-0 kubenswrapper[7604]: I0309 16:25:44.216227 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " 
pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.216460 master-0 kubenswrapper[7604]: I0309 16:25:44.216273 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:44.216555 master-0 kubenswrapper[7604]: I0309 16:25:44.216460 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:44.216555 master-0 kubenswrapper[7604]: I0309 16:25:44.216501 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:44.216610 master-0 kubenswrapper[7604]: I0309 16:25:44.216577 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.216646 master-0 kubenswrapper[7604]: I0309 16:25:44.216615 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") 
pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.216703 master-0 kubenswrapper[7604]: I0309 16:25:44.216637 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:44.216703 master-0 kubenswrapper[7604]: I0309 16:25:44.216690 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:44.216765 master-0 kubenswrapper[7604]: I0309 16:25:44.216748 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:25:44.216981 master-0 kubenswrapper[7604]: I0309 16:25:44.216801 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.216981 master-0 kubenswrapper[7604]: I0309 16:25:44.216853 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.216981 master-0 kubenswrapper[7604]: I0309 16:25:44.216904 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:44.216981 master-0 kubenswrapper[7604]: I0309 16:25:44.216947 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:44.217275 master-0 kubenswrapper[7604]: I0309 16:25:44.217038 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:25:44.218535 master-0 kubenswrapper[7604]: I0309 16:25:44.218419 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 
16:25:44.218645 master-0 kubenswrapper[7604]: I0309 16:25:44.218615 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.218939 master-0 kubenswrapper[7604]: I0309 16:25:44.218817 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219044 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219077 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219098 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219126 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219163 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219188 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219235 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:25:44.219654 master-0 kubenswrapper[7604]: I0309 16:25:44.219571 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:25:44.220203 master-0 kubenswrapper[7604]: I0309 16:25:44.219928 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 09 16:25:44.227978 master-0 kubenswrapper[7604]: I0309 16:25:44.227912 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:25:44.239189 master-0 kubenswrapper[7604]: I0309 16:25:44.239106 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 09 16:25:44.240870 master-0 kubenswrapper[7604]: I0309 16:25:44.240829 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:25:44.259737 master-0 kubenswrapper[7604]: I0309 16:25:44.259568 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 09 16:25:44.279821 master-0 kubenswrapper[7604]: I0309 16:25:44.279755 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 09 16:25:44.286619 master-0 kubenswrapper[7604]: I0309 16:25:44.286559 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:44.300323 master-0 kubenswrapper[7604]: I0309 16:25:44.300256 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 09 16:25:44.301383 master-0 kubenswrapper[7604]: I0309 16:25:44.301344 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.319910 master-0 kubenswrapper[7604]: I0309 16:25:44.319863 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:44.319910 master-0 kubenswrapper[7604]: I0309 16:25:44.319905 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.319910 master-0 kubenswrapper[7604]: I0309 16:25:44.319925 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.319945 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.319974 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: E0309 16:25:44.320096 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: E0309 16:25:44.320188 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.820165969 +0000 UTC m=+1.874135392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320254 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320322 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320350 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320395 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320466 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320482 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320580 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: E0309 16:25:44.320598 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320613 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320635 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: E0309 16:25:44.320651 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.820631201 +0000 UTC m=+1.874600624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320662 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320671 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320697 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: E0309 16:25:44.320705 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320719 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: E0309 16:25:44.320734 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.820724974 +0000 UTC m=+1.874694517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320767 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320789 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320818 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320819 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320891 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.320869 master-0 kubenswrapper[7604]: I0309 16:25:44.320910 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.320939 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.320947 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.320973 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.320993 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321018 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.320998 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.320999 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321036 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321064 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321077 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321090 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.821065923 +0000 UTC m=+1.875035456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321113 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321136 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321153 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321163 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321180 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.821171946 +0000 UTC m=+1.875141489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321203 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321224 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321239 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321266 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321279 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321302 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321309 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.82130132 +0000 UTC m=+1.875270733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321345 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321358 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321381 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.821375422 +0000 UTC m=+1.875344845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321355 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321410 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321455 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321469 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.821457964 +0000 UTC m=+1.875427497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321492 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321497 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321417 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321521 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: E0309 16:25:44.321533 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:44.821527106 +0000 UTC m=+1.875496529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321542 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321604 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321645 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321650 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321688 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321706 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321718 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321729 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321690 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321749 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321802 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321823 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321838 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321843 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321867 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321874 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321892 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321917 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321942 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") "
pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321967 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.321990 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.322008 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.321920 master-0 kubenswrapper[7604]: I0309 16:25:44.322012 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322047 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.324352 
master-0 kubenswrapper[7604]: I0309 16:25:44.322272 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322308 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322345 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322348 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322399 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322456 
7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322511 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.322590 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:25:44.324352 master-0 kubenswrapper[7604]: I0309 16:25:44.323871 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:44.352215 master-0 kubenswrapper[7604]: I0309 16:25:44.351951 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:25:44.374964 master-0 kubenswrapper[7604]: I0309 16:25:44.373035 7604 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 16:25:44.374964 master-0 kubenswrapper[7604]: I0309 16:25:44.374064 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.393313 master-0 kubenswrapper[7604]: I0309 16:25:44.393226 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5trxh\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:44.413295 master-0 kubenswrapper[7604]: I0309 16:25:44.413223 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.432056 master-0 kubenswrapper[7604]: I0309 16:25:44.431974 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:25:44.451495 master-0 kubenswrapper[7604]: I0309 16:25:44.451448 7604 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:25:44.471439 master-0 kubenswrapper[7604]: I0309 16:25:44.471348 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:25:44.491733 master-0 kubenswrapper[7604]: I0309 16:25:44.491657 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:44.511480 master-0 kubenswrapper[7604]: I0309 16:25:44.511323 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:25:44.533558 master-0 kubenswrapper[7604]: I0309 16:25:44.533495 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") 
pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:44.551444 master-0 kubenswrapper[7604]: I0309 16:25:44.551376 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-497s5\" (UniqueName: \"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:25:44.570526 master-0 kubenswrapper[7604]: I0309 16:25:44.570462 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:44.592177 master-0 kubenswrapper[7604]: I0309 16:25:44.591473 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:25:44.627241 master-0 kubenswrapper[7604]: E0309 16:25:44.627171 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:25:44.646585 master-0 kubenswrapper[7604]: W0309 16:25:44.646496 7604 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses 
hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 09 16:25:44.646802 master-0 kubenswrapper[7604]: E0309 16:25:44.646618 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:25:44.666239 master-0 kubenswrapper[7604]: E0309 16:25:44.666182 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:25:44.688537 master-0 kubenswrapper[7604]: E0309 16:25:44.688492 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:44.707687 master-0 kubenswrapper[7604]: E0309 16:25:44.707634 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:25:44.727803 master-0 kubenswrapper[7604]: I0309 16:25:44.727730 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: 
\"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:44.727803 master-0 kubenswrapper[7604]: I0309 16:25:44.727792 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.728065 master-0 kubenswrapper[7604]: I0309 16:25:44.727844 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:44.728065 master-0 kubenswrapper[7604]: I0309 16:25:44.727894 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:44.728065 master-0 kubenswrapper[7604]: I0309 16:25:44.727926 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:44.728065 master-0 kubenswrapper[7604]: I0309 
16:25:44.727951 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:44.728179 master-0 kubenswrapper[7604]: E0309 16:25:44.728105 7604 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:44.728179 master-0 kubenswrapper[7604]: E0309 16:25:44.728144 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.728132183 +0000 UTC m=+2.782101606 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:44.728963 master-0 kubenswrapper[7604]: E0309 16:25:44.728606 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:44.728963 master-0 kubenswrapper[7604]: E0309 16:25:44.728762 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:44.728963 master-0 kubenswrapper[7604]: E0309 16:25:44.728824 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:44.728963 master-0 
kubenswrapper[7604]: E0309 16:25:44.728888 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:44.728963 master-0 kubenswrapper[7604]: E0309 16:25:44.728920 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:44.728963 master-0 kubenswrapper[7604]: E0309 16:25:44.728961 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.728634227 +0000 UTC m=+2.782603650 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:25:44.729209 master-0 kubenswrapper[7604]: E0309 16:25:44.728980 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.728969527 +0000 UTC m=+2.782938960 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:44.729209 master-0 kubenswrapper[7604]: E0309 16:25:44.728994 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.728988108 +0000 UTC m=+2.782957531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:44.729209 master-0 kubenswrapper[7604]: E0309 16:25:44.729007 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.729001188 +0000 UTC m=+2.782970611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:44.729209 master-0 kubenswrapper[7604]: E0309 16:25:44.729019 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:45.729013118 +0000 UTC m=+2.782982541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:44.737546 master-0 kubenswrapper[7604]: I0309 16:25:44.737497 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:25:44.755547 master-0 kubenswrapper[7604]: I0309 16:25:44.754496 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:44.771235 master-0 kubenswrapper[7604]: I0309 16:25:44.771113 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:44.792448 master-0 kubenswrapper[7604]: I0309 16:25:44.792362 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvh62\" (UniqueName: 
\"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:25:44.810842 master-0 kubenswrapper[7604]: I0309 16:25:44.810797 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:25:44.828871 master-0 kubenswrapper[7604]: I0309 16:25:44.828814 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:44.828871 master-0 kubenswrapper[7604]: I0309 16:25:44.828860 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:44.829083 master-0 kubenswrapper[7604]: I0309 16:25:44.828890 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:44.829083 master-0 
kubenswrapper[7604]: I0309 16:25:44.828913 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:44.829083 master-0 kubenswrapper[7604]: I0309 16:25:44.828950 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:44.829083 master-0 kubenswrapper[7604]: I0309 16:25:44.828976 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:44.829083 master-0 kubenswrapper[7604]: I0309 16:25:44.828993 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:44.829083 master-0 kubenswrapper[7604]: I0309 16:25:44.829016 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:44.829083 master-0 kubenswrapper[7604]: I0309 16:25:44.829038 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:44.829291 master-0 kubenswrapper[7604]: E0309 16:25:44.829227 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:44.829291 master-0 kubenswrapper[7604]: E0309 16:25:44.829277 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.82925972 +0000 UTC m=+2.883229143 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:44.829655 master-0 kubenswrapper[7604]: E0309 16:25:44.829632 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 09 16:25:44.829689 master-0 kubenswrapper[7604]: E0309 16:25:44.829668 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829659122 +0000 UTC m=+2.883628545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found
Mar 09 16:25:44.829732 master-0 kubenswrapper[7604]: E0309 16:25:44.829701 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:44.829732 master-0 kubenswrapper[7604]: E0309 16:25:44.829722 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829715773 +0000 UTC m=+2.883685196 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:44.829799 master-0 kubenswrapper[7604]: E0309 16:25:44.829752 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:44.829799 master-0 kubenswrapper[7604]: E0309 16:25:44.829769 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829764515 +0000 UTC m=+2.883733938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:25:44.829799 master-0 kubenswrapper[7604]: E0309 16:25:44.829797 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:44.829879 master-0 kubenswrapper[7604]: E0309 16:25:44.829813 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829807626 +0000 UTC m=+2.883777049 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found
Mar 09 16:25:44.829879 master-0 kubenswrapper[7604]: E0309 16:25:44.829840 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:44.829879 master-0 kubenswrapper[7604]: E0309 16:25:44.829858 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829851997 +0000 UTC m=+2.883821410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:44.829961 master-0 kubenswrapper[7604]: E0309 16:25:44.829888 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 09 16:25:44.829961 master-0 kubenswrapper[7604]: E0309 16:25:44.829905 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829900018 +0000 UTC m=+2.883869441 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found
Mar 09 16:25:44.829961 master-0 kubenswrapper[7604]: E0309 16:25:44.829933 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:44.829961 master-0 kubenswrapper[7604]: E0309 16:25:44.829950 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.82994488 +0000 UTC m=+2.883914303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found
Mar 09 16:25:44.830066 master-0 kubenswrapper[7604]: E0309 16:25:44.829978 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:25:44.830066 master-0 kubenswrapper[7604]: E0309 16:25:44.829995 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:45.829990171 +0000 UTC m=+2.883959594 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found
Mar 09 16:25:44.831958 master-0 kubenswrapper[7604]: I0309 16:25:44.831934 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:44.872258 master-0 kubenswrapper[7604]: I0309 16:25:44.868339 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zwh\" (UniqueName: \"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:44.877405 master-0 kubenswrapper[7604]: I0309 16:25:44.877319 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:44.892549 master-0 kubenswrapper[7604]: I0309 16:25:44.892513 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID:
\"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"
Mar 09 16:25:44.911149 master-0 kubenswrapper[7604]: I0309 16:25:44.911084 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p2nd\" (UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:44.931500 master-0 kubenswrapper[7604]: I0309 16:25:44.931462 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:44.952135 master-0 kubenswrapper[7604]: I0309 16:25:44.952071 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:25:44.974499 master-0 kubenswrapper[7604]: I0309 16:25:44.974452 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:25:44.993175 master-0 kubenswrapper[7604]: I0309 16:25:44.993138 7604 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:45.004867 master-0 kubenswrapper[7604]: E0309 16:25:45.004770 7604 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:f14a5fe619812068a28d20e24d16e4a25c2cf591430257f414c314e4bcf51119: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:f14a5fe619812068a28d20e24d16e4a25c2cf591430257f414c314e4bcf51119\": context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5"
Mar 09 16:25:45.005384 master-0 kubenswrapper[7604]: E0309 16:25:45.005016 7604 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) --authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-497s5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-64488f9d78-xzwh9_openshift-config-operator(457f42a7-f14c-4d61-a87a-bc1ed422feed): ErrImagePull: rpc error: code = Canceled desc = reading blob 
sha256:f14a5fe619812068a28d20e24d16e4a25c2cf591430257f414c314e4bcf51119: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:f14a5fe619812068a28d20e24d16e4a25c2cf591430257f414c314e4bcf51119\": context canceled" logger="UnhandledError"
Mar 09 16:25:45.006303 master-0 kubenswrapper[7604]: E0309 16:25:45.006255 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:f14a5fe619812068a28d20e24d16e4a25c2cf591430257f414c314e4bcf51119: Get \\\"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:f14a5fe619812068a28d20e24d16e4a25c2cf591430257f414c314e4bcf51119\\\": context canceled\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed"
Mar 09 16:25:45.011639 master-0 kubenswrapper[7604]: I0309 16:25:45.011594 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:45.017058 master-0 kubenswrapper[7604]: I0309 16:25:45.016991 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:25:45.020591 master-0 kubenswrapper[7604]: E0309 16:25:45.020543 7604 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3"
Mar 09 16:25:45.020749 master-0 kubenswrapper[7604]: E0309 16:25:45.020701 7604 kuberuntime_manager.go:1274]
"Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5e9989ee0577e930adcd97085176343a881bf92537dda1bf0325a3b1faf96d6,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z242f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000160000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-snapshot-controller-operator-5685fbc7d-t42zc_openshift-cluster-storage-operator(a62ba179-443d-424f-8cff-c75677e8cd5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 09 16:25:45.022468 master-0 kubenswrapper[7604]: E0309 16:25:45.022095 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" podUID="a62ba179-443d-424f-8cff-c75677e8cd5c"
Mar 09 16:25:45.030758 master-0 kubenswrapper[7604]: I0309 16:25:45.030711 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:45.052029 master-0 kubenswrapper[7604]: I0309 16:25:45.051980 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsqs\" (UniqueName: \"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:25:45.056695 master-0 kubenswrapper[7604]: I0309 16:25:45.056616 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:45.062505 master-0 kubenswrapper[7604]: I0309 16:25:45.062432 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:25:45.072044 master-0 kubenswrapper[7604]: I0309
16:25:45.071930 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:25:45.093795 master-0 kubenswrapper[7604]: I0309 16:25:45.093714 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:45.115800 master-0 kubenswrapper[7604]: I0309 16:25:45.115749 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb"
Mar 09 16:25:45.131605 master-0 kubenswrapper[7604]: I0309 16:25:45.131532 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:25:45.151485 master-0 kubenswrapper[7604]: I0309 16:25:45.151396 7604 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled.
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 09 16:25:45.156567 master-0 kubenswrapper[7604]: I0309 16:25:45.156297 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:45.289689 master-0 kubenswrapper[7604]: I0309 16:25:45.289626 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:45.599479 master-0 kubenswrapper[7604]: I0309 16:25:45.599353 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:45.621377 master-0 kubenswrapper[7604]: I0309 16:25:45.621296 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:25:45.702637 master-0 kubenswrapper[7604]: E0309 16:25:45.702585 7604 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba"
Mar 09 16:25:45.702806 master-0 kubenswrapper[7604]: E0309 16:25:45.702750 7604 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml
-v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv95c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-69b6fc6b88-j99pw_openshift-service-ca-operator(a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 09 16:25:45.704024 master-0 kubenswrapper[7604]: E0309 16:25:45.703979 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" podUID="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a"
Mar 09 16:25:45.738634 master-0 kubenswrapper[7604]: I0309 16:25:45.738575 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:25:45.738713 master-0 kubenswrapper[7604]: I0309 16:25:45.738661 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:45.738713 master-0 kubenswrapper[7604]: I0309 16:25:45.738694 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:45.738771 master-0 kubenswrapper[7604]: I0309 16:25:45.738739 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:45.738771 master-0 kubenswrapper[7604]: I0309 16:25:45.738762 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:45.738828 master-0 kubenswrapper[7604]: I0309 16:25:45.738779 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:45.738996 master-0 kubenswrapper[7604]: E0309 16:25:45.738964 7604 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 09 16:25:45.739033 master-0 kubenswrapper[7604]: E0309 16:25:45.739026 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.739008582 +0000 UTC m=+4.792977995 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found
Mar 09 16:25:45.739089 master-0 kubenswrapper[7604]: E0309 16:25:45.739073 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 09 16:25:45.739121 master-0 kubenswrapper[7604]: E0309 16:25:45.739102 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.739092824 +0000 UTC m=+4.793062247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found
Mar 09 16:25:45.739172 master-0 kubenswrapper[7604]: E0309 16:25:45.739155 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:45.739260 master-0 kubenswrapper[7604]: E0309 16:25:45.739182 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.739175967 +0000 UTC m=+4.793145380 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:45.739260 master-0 kubenswrapper[7604]: E0309 16:25:45.739227 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:45.739260 master-0 kubenswrapper[7604]: E0309 16:25:45.739245 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.739239338 +0000 UTC m=+4.793208761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:45.739349 master-0 kubenswrapper[7604]: E0309 16:25:45.739282 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:45.739349 master-0 kubenswrapper[7604]: E0309 16:25:45.739306 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.73929617 +0000 UTC m=+4.793265593 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:45.739484 master-0 kubenswrapper[7604]: E0309 16:25:45.739352 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:45.739484 master-0 kubenswrapper[7604]: E0309 16:25:45.739377 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.739370092 +0000 UTC m=+4.793339515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:45.844342 master-0 kubenswrapper[7604]: I0309 16:25:45.841142 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:45.844342 master-0 kubenswrapper[7604]: E0309 16:25:45.841469 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:45.844481 master-0 kubenswrapper[7604]: E0309 
16:25:45.844439 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.844393589 +0000 UTC m=+4.898363172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:45.844608 master-0 kubenswrapper[7604]: E0309 16:25:45.844546 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:45.844665 master-0 kubenswrapper[7604]: E0309 16:25:45.844631 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.844618665 +0000 UTC m=+4.898588088 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found Mar 09 16:25:45.844712 master-0 kubenswrapper[7604]: I0309 16:25:45.844667 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:45.844790 master-0 kubenswrapper[7604]: I0309 16:25:45.844764 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:45.844903 master-0 kubenswrapper[7604]: I0309 16:25:45.844861 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:45.845005 master-0 kubenswrapper[7604]: E0309 16:25:45.844972 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:45.845052 master-0 kubenswrapper[7604]: E0309 16:25:45.845035 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:47.845022056 +0000 UTC m=+4.898991649 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:45.845052 master-0 kubenswrapper[7604]: E0309 16:25:45.844997 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 09 16:25:45.845130 master-0 kubenswrapper[7604]: I0309 16:25:45.845059 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:45.845130 master-0 kubenswrapper[7604]: E0309 16:25:45.845099 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.845067107 +0000 UTC m=+4.899036600 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found Mar 09 16:25:45.845130 master-0 kubenswrapper[7604]: I0309 16:25:45.845124 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:45.845241 master-0 kubenswrapper[7604]: E0309 16:25:45.845131 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 09 16:25:45.845241 master-0 kubenswrapper[7604]: I0309 16:25:45.845184 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:45.845241 master-0 kubenswrapper[7604]: E0309 16:25:45.845203 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.845192031 +0000 UTC m=+4.899161534 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:45.845241 master-0 kubenswrapper[7604]: E0309 16:25:45.845227 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 09 16:25:45.845408 master-0 kubenswrapper[7604]: E0309 16:25:45.845170 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 09 16:25:45.845408 master-0 kubenswrapper[7604]: E0309 16:25:45.845267 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.845252822 +0000 UTC m=+4.899222245 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:25:45.845408 master-0 kubenswrapper[7604]: I0309 16:25:45.845283 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:45.845408 master-0 kubenswrapper[7604]: E0309 16:25:45.845288 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.845278643 +0000 UTC m=+4.899248066 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found Mar 09 16:25:45.845408 master-0 kubenswrapper[7604]: E0309 16:25:45.845326 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:45.845408 master-0 kubenswrapper[7604]: E0309 16:25:45.845347 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.845341855 +0000 UTC m=+4.899311278 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:45.845950 master-0 kubenswrapper[7604]: I0309 16:25:45.845541 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:45.845950 master-0 kubenswrapper[7604]: E0309 16:25:45.845670 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 09 16:25:45.845950 master-0 kubenswrapper[7604]: E0309 16:25:45.845711 7604 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:47.845699815 +0000 UTC m=+4.899669418 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found Mar 09 16:25:45.985340 master-0 kubenswrapper[7604]: I0309 16:25:45.985298 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-ncskk"] Mar 09 16:25:46.063493 master-0 kubenswrapper[7604]: W0309 16:25:46.057580 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7937ccab_a6fb_4401_a4fd_7a2a91a7193f.slice/crio-e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8 WatchSource:0}: Error finding container e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8: Status 404 returned error can't find the container with id e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8 Mar 09 16:25:46.189835 master-0 kubenswrapper[7604]: I0309 16:25:46.189782 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" event={"ID":"34a4491c-12cc-4531-ad3e-246e93ed7842","Type":"ContainerStarted","Data":"fa5ddd5802e33c8a6619b86d4545b8a3364c98e851507c10917062099a64157c"} Mar 09 16:25:46.196651 master-0 kubenswrapper[7604]: I0309 16:25:46.196604 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" 
event={"ID":"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d","Type":"ContainerStarted","Data":"8fc1f9c122b644d42570f9573ceb86c8b66b157aee149e8b75a17dc9c0fc5570"} Mar 09 16:25:46.202414 master-0 kubenswrapper[7604]: I0309 16:25:46.202355 7604 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" containerID="d7922052b68455850c77125803bb69415764501411377535a0999663fe5a312c" exitCode=0 Mar 09 16:25:46.202583 master-0 kubenswrapper[7604]: I0309 16:25:46.202504 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerDied","Data":"d7922052b68455850c77125803bb69415764501411377535a0999663fe5a312c"} Mar 09 16:25:46.205501 master-0 kubenswrapper[7604]: I0309 16:25:46.205468 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" event={"ID":"d2d3c20a-f92e-433b-9fbc-b667b7bcf175","Type":"ContainerStarted","Data":"cc3b26ecc6db80d8920394a2785316da766a94e7ed17c29a0dba7776c2765c20"} Mar 09 16:25:46.207937 master-0 kubenswrapper[7604]: I0309 16:25:46.207520 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" event={"ID":"6cf9eae5-38bc-48fa-8339-d0751bb18e8c","Type":"ContainerStarted","Data":"1e5e32f0f63434eb2622b072a5c0a325920460736fce227cb33b7dd8fc950069"} Mar 09 16:25:46.231087 master-0 kubenswrapper[7604]: I0309 16:25:46.231027 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" event={"ID":"e2e38be5-1d33-4171-b27f-78a335f1590b","Type":"ContainerStarted","Data":"aae9b4fa27818489ab82742a1d088f45fbd99626e96c87f0d251b8c8d0c8bce4"} Mar 09 16:25:46.236682 master-0 kubenswrapper[7604]: I0309 16:25:46.236636 7604 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" event={"ID":"3a612208-f777-486f-9dde-048b2d898c7f","Type":"ContainerStarted","Data":"a68cd08d6d3f33869738052123770a9d77db899c72df9e881a8184753514b484"} Mar 09 16:25:46.238538 master-0 kubenswrapper[7604]: I0309 16:25:46.238450 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" event={"ID":"6c4dfdcc-e182-4831-98e4-1eedb069bcf6","Type":"ContainerStarted","Data":"7c3fbf08ff6da10a25d918bd4cbabfd4c79ce8ba8a9c8a411b80c1c351bae8a7"} Mar 09 16:25:46.240917 master-0 kubenswrapper[7604]: I0309 16:25:46.240884 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ncskk" event={"ID":"7937ccab-a6fb-4401-a4fd-7a2a91a7193f","Type":"ContainerStarted","Data":"b7d8c38ad900bde149468b3baf31aa9943993455b006da37256803f60d9cf144"} Mar 09 16:25:46.240982 master-0 kubenswrapper[7604]: I0309 16:25:46.240924 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ncskk" event={"ID":"7937ccab-a6fb-4401-a4fd-7a2a91a7193f","Type":"ContainerStarted","Data":"e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8"} Mar 09 16:25:46.240982 master-0 kubenswrapper[7604]: I0309 16:25:46.240978 7604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 16:25:47.181286 master-0 kubenswrapper[7604]: I0309 16:25:47.180817 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:47.195447 master-0 kubenswrapper[7604]: I0309 16:25:47.195395 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:47.260719 master-0 kubenswrapper[7604]: I0309 16:25:47.260684 7604 prober_manager.go:312] "Failed 
to trigger a manual run" probe="Readiness" Mar 09 16:25:47.261245 master-0 kubenswrapper[7604]: I0309 16:25:47.261230 7604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 16:25:47.261330 master-0 kubenswrapper[7604]: I0309 16:25:47.261318 7604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 16:25:47.691533 master-0 kubenswrapper[7604]: I0309 16:25:47.691433 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:47.722028 master-0 kubenswrapper[7604]: I0309 16:25:47.721962 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:25:47.780459 master-0 kubenswrapper[7604]: I0309 16:25:47.780201 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:47.780459 master-0 kubenswrapper[7604]: I0309 16:25:47.780271 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:47.780693 master-0 kubenswrapper[7604]: E0309 16:25:47.780478 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:47.780693 master-0 kubenswrapper[7604]: E0309 16:25:47.780572 7604 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.780547865 +0000 UTC m=+8.834517358 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:25:47.780693 master-0 kubenswrapper[7604]: I0309 16:25:47.780653 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:47.780814 master-0 kubenswrapper[7604]: I0309 16:25:47.780696 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:47.780814 master-0 kubenswrapper[7604]: I0309 16:25:47.780721 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:47.780814 master-0 kubenswrapper[7604]: I0309 16:25:47.780744 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:47.780897 master-0 kubenswrapper[7604]: E0309 16:25:47.780839 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:47.780897 master-0 kubenswrapper[7604]: E0309 16:25:47.780862 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.780855784 +0000 UTC m=+8.834825207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:47.781230 master-0 kubenswrapper[7604]: E0309 16:25:47.781205 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:47.781286 master-0 kubenswrapper[7604]: E0309 16:25:47.781237 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.781230354 +0000 UTC m=+8.835199777 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:47.781286 master-0 kubenswrapper[7604]: E0309 16:25:47.781278 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:47.781348 master-0 kubenswrapper[7604]: E0309 16:25:47.781295 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.781289736 +0000 UTC m=+8.835259159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:47.781348 master-0 kubenswrapper[7604]: E0309 16:25:47.781328 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:47.781348 master-0 kubenswrapper[7604]: E0309 16:25:47.781345 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.781338817 +0000 UTC m=+8.835308240 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:47.781444 master-0 kubenswrapper[7604]: E0309 16:25:47.781375 7604 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:47.781444 master-0 kubenswrapper[7604]: E0309 16:25:47.781393 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.781387858 +0000 UTC m=+8.835357281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:47.881489 master-0 kubenswrapper[7604]: I0309 16:25:47.881387 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:47.881489 master-0 kubenswrapper[7604]: I0309 16:25:47.881487 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:47.881699 master-0 kubenswrapper[7604]: I0309 16:25:47.881521 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:47.881699 master-0 kubenswrapper[7604]: E0309 16:25:47.881638 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:47.881699 master-0 kubenswrapper[7604]: E0309 16:25:47.881681 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:47.881810 master-0 kubenswrapper[7604]: E0309 16:25:47.881636 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 09 16:25:47.881810 master-0 kubenswrapper[7604]: E0309 16:25:47.881702 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.881682382 +0000 UTC m=+8.935651805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found
Mar 09 16:25:47.881810 master-0 kubenswrapper[7604]: E0309 16:25:47.881768 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.881748694 +0000 UTC m=+8.935718107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found
Mar 09 16:25:47.881810 master-0 kubenswrapper[7604]: E0309 16:25:47.881787 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.881779635 +0000 UTC m=+8.935749058 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:47.881991 master-0 kubenswrapper[7604]: I0309 16:25:47.881815 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:47.881991 master-0 kubenswrapper[7604]: I0309 16:25:47.881879 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:47.881991 master-0 kubenswrapper[7604]: E0309 16:25:47.881983 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:47.882078 master-0 kubenswrapper[7604]: E0309 16:25:47.882005 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.881998221 +0000 UTC m=+8.935967644 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:47.882078 master-0 kubenswrapper[7604]: E0309 16:25:47.882021 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:25:47.882131 master-0 kubenswrapper[7604]: E0309 16:25:47.882100 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.882081543 +0000 UTC m=+8.936050956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found
Mar 09 16:25:47.882164 master-0 kubenswrapper[7604]: I0309 16:25:47.882128 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:47.882192 master-0 kubenswrapper[7604]: I0309 16:25:47.882168 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:47.882218 master-0 kubenswrapper[7604]: I0309 16:25:47.882200 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:47.882248 master-0 kubenswrapper[7604]: I0309 16:25:47.882223 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:47.882276 master-0 kubenswrapper[7604]: E0309 16:25:47.882262 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 09 16:25:47.882310 master-0 kubenswrapper[7604]: E0309 16:25:47.882293 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.882284259 +0000 UTC m=+8.936253762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found
Mar 09 16:25:47.882342 master-0 kubenswrapper[7604]: E0309 16:25:47.882318 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:47.882342 master-0 kubenswrapper[7604]: E0309 16:25:47.882338 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:47.882393 master-0 kubenswrapper[7604]: E0309 16:25:47.882345 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:47.882393 master-0 kubenswrapper[7604]: E0309 16:25:47.882351 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.88234113 +0000 UTC m=+8.936310623 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:47.882393 master-0 kubenswrapper[7604]: E0309 16:25:47.882371 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.882362321 +0000 UTC m=+8.936331864 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found
Mar 09 16:25:47.882393 master-0 kubenswrapper[7604]: E0309 16:25:47.882388 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.882380471 +0000 UTC m=+8.936349984 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:25:48.264928 master-0 kubenswrapper[7604]: I0309 16:25:48.264395 7604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 09 16:25:48.345975 master-0 kubenswrapper[7604]: I0309 16:25:48.345440 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:48.356914 master-0 kubenswrapper[7604]: I0309 16:25:48.356835 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 09 16:25:48.470502 master-0 kubenswrapper[7604]: I0309 16:25:48.470434 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:25:48.511066 master-0 kubenswrapper[7604]: I0309 16:25:48.511007 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"]
Mar 09 16:25:48.511308 master-0 kubenswrapper[7604]: E0309 16:25:48.511184 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller"
Mar 09 16:25:48.511308 master-0 kubenswrapper[7604]: I0309 16:25:48.511201 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller"
Mar 09 16:25:48.511308 master-0 kubenswrapper[7604]: E0309 16:25:48.511217 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerName="prober"
Mar 09 16:25:48.511308 master-0 kubenswrapper[7604]: I0309 16:25:48.511225 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerName="prober"
Mar 09 16:25:48.511483 master-0 kubenswrapper[7604]: I0309 16:25:48.511312 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller"
Mar 09 16:25:48.511483 master-0 kubenswrapper[7604]: I0309 16:25:48.511325 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6b8350-34b6-4a0b-9027-3ea3c7e11d09" containerName="prober"
Mar 09 16:25:48.511756 master-0 kubenswrapper[7604]: I0309 16:25:48.511731 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"
Mar 09 16:25:48.514075 master-0 kubenswrapper[7604]: I0309 16:25:48.513952 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 09 16:25:48.515847 master-0 kubenswrapper[7604]: I0309 16:25:48.515783 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 09 16:25:48.527390 master-0 kubenswrapper[7604]: I0309 16:25:48.526228 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"]
Mar 09 16:25:48.591328 master-0 kubenswrapper[7604]: I0309 16:25:48.590857 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grmch\" (UniqueName: \"kubernetes.io/projected/f3033e86-fee0-45dc-ba74-d5448a777400-kube-api-access-grmch\") pod \"migrator-57ccdf9b5-4vd54\" (UID: \"f3033e86-fee0-45dc-ba74-d5448a777400\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"
Mar 09 16:25:48.592050 master-0 kubenswrapper[7604]: I0309 16:25:48.591736 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"]
Mar 09 16:25:48.592648 master-0 kubenswrapper[7604]: I0309 16:25:48.592604 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.598803 master-0 kubenswrapper[7604]: I0309 16:25:48.595923 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 09 16:25:48.598803 master-0 kubenswrapper[7604]: I0309 16:25:48.596152 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 16:25:48.598803 master-0 kubenswrapper[7604]: I0309 16:25:48.596318 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:25:48.598803 master-0 kubenswrapper[7604]: I0309 16:25:48.596320 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 09 16:25:48.598803 master-0 kubenswrapper[7604]: I0309 16:25:48.596342 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 09 16:25:48.598803 master-0 kubenswrapper[7604]: I0309 16:25:48.597284 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 16:25:48.612987 master-0 kubenswrapper[7604]: I0309 16:25:48.612911 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"]
Mar 09 16:25:48.692056 master-0 kubenswrapper[7604]: I0309 16:25:48.691969 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grmch\" (UniqueName: \"kubernetes.io/projected/f3033e86-fee0-45dc-ba74-d5448a777400-kube-api-access-grmch\") pod \"migrator-57ccdf9b5-4vd54\" (UID: \"f3033e86-fee0-45dc-ba74-d5448a777400\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"
Mar 09 16:25:48.692056 master-0 kubenswrapper[7604]: I0309 16:25:48.692039 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.692320 master-0 kubenswrapper[7604]: I0309 16:25:48.692127 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4pn8\" (UniqueName: \"kubernetes.io/projected/8b25d2d1-22b4-483b-bd2b-43a29030cb79-kube-api-access-g4pn8\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.692320 master-0 kubenswrapper[7604]: I0309 16:25:48.692144 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.692320 master-0 kubenswrapper[7604]: I0309 16:25:48.692176 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.692472 master-0 kubenswrapper[7604]: I0309 16:25:48.692339 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.713260 master-0 kubenswrapper[7604]: I0309 16:25:48.713192 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grmch\" (UniqueName: \"kubernetes.io/projected/f3033e86-fee0-45dc-ba74-d5448a777400-kube-api-access-grmch\") pod \"migrator-57ccdf9b5-4vd54\" (UID: \"f3033e86-fee0-45dc-ba74-d5448a777400\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"
Mar 09 16:25:48.794056 master-0 kubenswrapper[7604]: I0309 16:25:48.793991 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.794259 master-0 kubenswrapper[7604]: E0309 16:25:48.794153 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 09 16:25:48.794259 master-0 kubenswrapper[7604]: E0309 16:25:48.794217 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:49.294202082 +0000 UTC m=+6.348171505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "openshift-global-ca" not found
Mar 09 16:25:48.794328 master-0 kubenswrapper[7604]: I0309 16:25:48.794279 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4pn8\" (UniqueName: \"kubernetes.io/projected/8b25d2d1-22b4-483b-bd2b-43a29030cb79-kube-api-access-g4pn8\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.794328 master-0 kubenswrapper[7604]: I0309 16:25:48.794296 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.794328 master-0 kubenswrapper[7604]: I0309 16:25:48.794315 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.794455 master-0 kubenswrapper[7604]: I0309 16:25:48.794344 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.794455 master-0 kubenswrapper[7604]: E0309 16:25:48.794436 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 09 16:25:48.794455 master-0 kubenswrapper[7604]: E0309 16:25:48.794456 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:49.294449699 +0000 UTC m=+6.348419122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "config" not found
Mar 09 16:25:48.794716 master-0 kubenswrapper[7604]: E0309 16:25:48.794649 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:25:48.794716 master-0 kubenswrapper[7604]: E0309 16:25:48.794692 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 09 16:25:48.794824 master-0 kubenswrapper[7604]: E0309 16:25:48.794696 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:49.294682995 +0000 UTC m=+6.348652428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : secret "serving-cert" not found
Mar 09 16:25:48.794824 master-0 kubenswrapper[7604]: E0309 16:25:48.794816 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:49.294758257 +0000 UTC m=+6.348727690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "client-ca" not found
Mar 09 16:25:48.812667 master-0 kubenswrapper[7604]: I0309 16:25:48.812618 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4pn8\" (UniqueName: \"kubernetes.io/projected/8b25d2d1-22b4-483b-bd2b-43a29030cb79-kube-api-access-g4pn8\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:48.828406 master-0 kubenswrapper[7604]: I0309 16:25:48.828354 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"
Mar 09 16:25:48.986578 master-0 kubenswrapper[7604]: I0309 16:25:48.985821 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54"]
Mar 09 16:25:48.994486 master-0 kubenswrapper[7604]: W0309 16:25:48.994391 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3033e86_fee0_45dc_ba74_d5448a777400.slice/crio-a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762 WatchSource:0}: Error finding container a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762: Status 404 returned error can't find the container with id a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762
Mar 09 16:25:49.269331 master-0 kubenswrapper[7604]: I0309 16:25:49.269254 7604 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" containerID="b70b23fca99483b0715615a35d01f07d100fae855b33a57805678b96e0a4e1a2" exitCode=0
Mar 09 16:25:49.270311 master-0 kubenswrapper[7604]: I0309 16:25:49.269370 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerDied","Data":"b70b23fca99483b0715615a35d01f07d100fae855b33a57805678b96e0a4e1a2"}
Mar 09 16:25:49.270751 master-0 kubenswrapper[7604]: I0309 16:25:49.270634 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54" event={"ID":"f3033e86-fee0-45dc-ba74-d5448a777400","Type":"ContainerStarted","Data":"a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762"}
Mar 09 16:25:49.273135 master-0 kubenswrapper[7604]: I0309 16:25:49.273090 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-g4tdb" event={"ID":"709aad35-08ca-4ff5-abe5-e1558c8dc83f","Type":"ContainerStarted","Data":"c27d77c5f8d0e79a3fdcaf9e4476d4958cb1358b4bf4e7d91a6fac1f0cdc090c"}
Mar 09 16:25:49.273210 master-0 kubenswrapper[7604]: I0309 16:25:49.273191 7604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 09 16:25:49.299674 master-0 kubenswrapper[7604]: I0309 16:25:49.299615 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:49.299886 master-0 kubenswrapper[7604]: E0309 16:25:49.299805 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 09 16:25:49.299927 master-0 kubenswrapper[7604]: E0309 16:25:49.299884 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.299860807 +0000 UTC m=+7.353830250 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "openshift-global-ca" not found
Mar 09 16:25:49.300073 master-0 kubenswrapper[7604]: I0309 16:25:49.300046 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:49.300213 master-0 kubenswrapper[7604]: E0309 16:25:49.300181 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:25:49.300269 master-0 kubenswrapper[7604]: E0309 16:25:49.300254 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.300234487 +0000 UTC m=+7.354203920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : secret "serving-cert" not found
Mar 09 16:25:49.300323 master-0 kubenswrapper[7604]: I0309 16:25:49.300302 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:49.300404 master-0 kubenswrapper[7604]: I0309 16:25:49.300383 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"
Mar 09 16:25:49.300568 master-0 kubenswrapper[7604]: E0309 16:25:49.300550 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 09 16:25:49.300597 master-0 kubenswrapper[7604]: E0309 16:25:49.300583 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.300574497 +0000 UTC m=+7.354543920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "config" not found
Mar 09 16:25:49.300639 master-0 kubenswrapper[7604]: E0309 16:25:49.300625 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 09 16:25:49.300670 master-0 kubenswrapper[7604]: E0309 16:25:49.300651 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.300644039 +0000 UTC m=+7.354613572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "client-ca" not found
Mar 09 16:25:49.504339 master-0 kubenswrapper[7604]: I0309 16:25:49.504057 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"]
Mar 09 16:25:49.508285 master-0 kubenswrapper[7604]: E0309 16:25:49.505270 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" podUID="8b25d2d1-22b4-483b-bd2b-43a29030cb79"
Mar 09 16:25:49.513248 master-0 kubenswrapper[7604]: I0309 16:25:49.513183 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"]
Mar 09 16:25:49.513687 master-0 kubenswrapper[7604]: I0309 16:25:49.513665 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.516043 master-0 kubenswrapper[7604]: I0309 16:25:49.515692 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 09 16:25:49.519479 master-0 kubenswrapper[7604]: I0309 16:25:49.517645 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:25:49.519479 master-0 kubenswrapper[7604]: I0309 16:25:49.517693 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 09 16:25:49.519479 master-0 kubenswrapper[7604]: I0309 16:25:49.517644 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 09 16:25:49.519479 master-0 kubenswrapper[7604]: I0309 16:25:49.517645 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 09 16:25:49.526929 master-0 kubenswrapper[7604]: I0309 16:25:49.526885 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"]
Mar 09 16:25:49.605628 master-0 kubenswrapper[7604]: I0309 16:25:49.605540 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xnb\" (UniqueName: \"kubernetes.io/projected/1d6c7cfb-2226-427a-99ee-d31f14aa975f-kube-api-access-r7xnb\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.605628 master-0 kubenswrapper[7604]: I0309 16:25:49.605617 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.605898 master-0 kubenswrapper[7604]: I0309 16:25:49.605676 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.605898 master-0 kubenswrapper[7604]: I0309 16:25:49.605731 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-config\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.707158 master-0 kubenswrapper[7604]: I0309 16:25:49.707087 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.707391 master-0 kubenswrapper[7604]: I0309 16:25:49.707179 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-config\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.707391 master-0 kubenswrapper[7604]: I0309 16:25:49.707234 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7xnb\" (UniqueName: \"kubernetes.io/projected/1d6c7cfb-2226-427a-99ee-d31f14aa975f-kube-api-access-r7xnb\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.707391 master-0 kubenswrapper[7604]: I0309 16:25:49.707271 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:49.707571 master-0 kubenswrapper[7604]: E0309 16:25:49.707398 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:25:49.707571 master-0 kubenswrapper[7604]: E0309 16:25:49.707465 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.207448412 +0000 UTC m=+7.261417835 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : secret "serving-cert" not found Mar 09 16:25:49.707732 master-0 kubenswrapper[7604]: E0309 16:25:49.707700 7604 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:49.707732 master-0 kubenswrapper[7604]: E0309 16:25:49.707728 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:50.207721579 +0000 UTC m=+7.261691002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : configmap "client-ca" not found Mar 09 16:25:49.708488 master-0 kubenswrapper[7604]: I0309 16:25:49.708463 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-config\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:49.730035 master-0 kubenswrapper[7604]: I0309 16:25:49.729663 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7xnb\" (UniqueName: \"kubernetes.io/projected/1d6c7cfb-2226-427a-99ee-d31f14aa975f-kube-api-access-r7xnb\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " 
pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:50.216661 master-0 kubenswrapper[7604]: I0309 16:25:50.216539 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:50.216661 master-0 kubenswrapper[7604]: E0309 16:25:50.216682 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:50.217001 master-0 kubenswrapper[7604]: E0309 16:25:50.216741 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.216724799 +0000 UTC m=+8.270694222 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : secret "serving-cert" not found Mar 09 16:25:50.217287 master-0 kubenswrapper[7604]: I0309 16:25:50.217147 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:50.217287 master-0 kubenswrapper[7604]: E0309 16:25:50.217244 7604 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:50.217287 master-0 kubenswrapper[7604]: E0309 16:25:50.217271 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:51.217263744 +0000 UTC m=+8.271233167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : configmap "client-ca" not found Mar 09 16:25:50.277150 master-0 kubenswrapper[7604]: I0309 16:25:50.277095 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.283280 master-0 kubenswrapper[7604]: I0309 16:25:50.283240 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.318301 master-0 kubenswrapper[7604]: I0309 16:25:50.318194 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.318655 master-0 kubenswrapper[7604]: I0309 16:25:50.318608 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.318736 master-0 kubenswrapper[7604]: I0309 16:25:50.318685 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.318777 master-0 kubenswrapper[7604]: I0309 16:25:50.318759 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.319146 master-0 kubenswrapper[7604]: E0309 16:25:50.318991 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 
09 16:25:50.319146 master-0 kubenswrapper[7604]: E0309 16:25:50.319074 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:52.319056569 +0000 UTC m=+9.373025982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : configmap "client-ca" not found Mar 09 16:25:50.319354 master-0 kubenswrapper[7604]: E0309 16:25:50.319268 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:50.319354 master-0 kubenswrapper[7604]: E0309 16:25:50.319336 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert podName:8b25d2d1-22b4-483b-bd2b-43a29030cb79 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:52.319318427 +0000 UTC m=+9.373287840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert") pod "controller-manager-6f7fd6c796-dswqr" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79") : secret "serving-cert" not found Mar 09 16:25:50.319920 master-0 kubenswrapper[7604]: I0309 16:25:50.319669 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.320513 master-0 kubenswrapper[7604]: I0309 16:25:50.320488 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") pod \"controller-manager-6f7fd6c796-dswqr\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:50.414269 master-0 kubenswrapper[7604]: I0309 16:25:50.414220 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:50.418864 master-0 kubenswrapper[7604]: I0309 16:25:50.418815 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:50.419380 master-0 kubenswrapper[7604]: I0309 16:25:50.419326 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") pod \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " Mar 09 16:25:50.419380 master-0 kubenswrapper[7604]: I0309 16:25:50.419372 7604 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4pn8\" (UniqueName: \"kubernetes.io/projected/8b25d2d1-22b4-483b-bd2b-43a29030cb79-kube-api-access-g4pn8\") pod \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " Mar 09 16:25:50.419522 master-0 kubenswrapper[7604]: I0309 16:25:50.419415 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") pod \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\" (UID: \"8b25d2d1-22b4-483b-bd2b-43a29030cb79\") " Mar 09 16:25:50.419877 master-0 kubenswrapper[7604]: I0309 16:25:50.419853 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config" (OuterVolumeSpecName: "config") pod "8b25d2d1-22b4-483b-bd2b-43a29030cb79" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:25:50.420007 master-0 kubenswrapper[7604]: I0309 16:25:50.419935 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b25d2d1-22b4-483b-bd2b-43a29030cb79" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:25:50.420007 master-0 kubenswrapper[7604]: I0309 16:25:50.419986 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:50.420007 master-0 kubenswrapper[7604]: I0309 16:25:50.420005 7604 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:50.422767 master-0 kubenswrapper[7604]: I0309 16:25:50.422705 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b25d2d1-22b4-483b-bd2b-43a29030cb79-kube-api-access-g4pn8" (OuterVolumeSpecName: "kube-api-access-g4pn8") pod "8b25d2d1-22b4-483b-bd2b-43a29030cb79" (UID: "8b25d2d1-22b4-483b-bd2b-43a29030cb79"). InnerVolumeSpecName "kube-api-access-g4pn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:25:50.521508 master-0 kubenswrapper[7604]: I0309 16:25:50.521389 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4pn8\" (UniqueName: \"kubernetes.io/projected/8b25d2d1-22b4-483b-bd2b-43a29030cb79-kube-api-access-g4pn8\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:51.228416 master-0 kubenswrapper[7604]: I0309 16:25:51.227958 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:51.228698 master-0 kubenswrapper[7604]: E0309 16:25:51.228147 7604 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:51.228698 master-0 kubenswrapper[7604]: E0309 16:25:51.228625 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:53.228586266 +0000 UTC m=+10.282555739 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : configmap "client-ca" not found Mar 09 16:25:51.228809 master-0 kubenswrapper[7604]: I0309 16:25:51.228783 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:51.228918 master-0 kubenswrapper[7604]: E0309 16:25:51.228878 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:51.228972 master-0 kubenswrapper[7604]: E0309 16:25:51.228959 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:53.228938645 +0000 UTC m=+10.282908068 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : secret "serving-cert" not found Mar 09 16:25:51.282212 master-0 kubenswrapper[7604]: I0309 16:25:51.282078 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54" event={"ID":"f3033e86-fee0-45dc-ba74-d5448a777400","Type":"ContainerStarted","Data":"564fb2a79a952ea5ed5e46ac80649fe6e98102af37a9ee181baf841061bad1da"} Mar 09 16:25:51.282212 master-0 kubenswrapper[7604]: I0309 16:25:51.282148 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54" event={"ID":"f3033e86-fee0-45dc-ba74-d5448a777400","Type":"ContainerStarted","Data":"9020c2877d01bf1d75b9b414257663f13cb08b4ffdfe4fe63c2e6c4af71478ce"} Mar 09 16:25:51.283192 master-0 kubenswrapper[7604]: I0309 16:25:51.282311 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-dswqr" Mar 09 16:25:51.289293 master-0 kubenswrapper[7604]: I0309 16:25:51.289069 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:25:51.298061 master-0 kubenswrapper[7604]: I0309 16:25:51.297968 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54" podStartSLOduration=1.9570379930000001 podStartE2EDuration="3.297945951s" podCreationTimestamp="2026-03-09 16:25:48 +0000 UTC" firstStartedPulling="2026-03-09 16:25:48.996542708 +0000 UTC m=+6.050512131" lastFinishedPulling="2026-03-09 16:25:50.337450666 +0000 UTC m=+7.391420089" observedRunningTime="2026-03-09 16:25:51.297731465 +0000 UTC m=+8.351700918" watchObservedRunningTime="2026-03-09 16:25:51.297945951 +0000 UTC m=+8.351915384" Mar 09 16:25:51.331777 master-0 kubenswrapper[7604]: I0309 16:25:51.331728 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"] Mar 09 16:25:51.343478 master-0 kubenswrapper[7604]: I0309 16:25:51.343021 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-dswqr"] Mar 09 16:25:51.435250 master-0 kubenswrapper[7604]: I0309 16:25:51.435204 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b25d2d1-22b4-483b-bd2b-43a29030cb79-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:51.435250 master-0 kubenswrapper[7604]: I0309 16:25:51.435246 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b25d2d1-22b4-483b-bd2b-43a29030cb79-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:25:51.841217 master-0 kubenswrapper[7604]: I0309 16:25:51.841154 
7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:25:51.841408 master-0 kubenswrapper[7604]: I0309 16:25:51.841241 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:51.841408 master-0 kubenswrapper[7604]: E0309 16:25:51.841348 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:51.841408 master-0 kubenswrapper[7604]: E0309 16:25:51.841368 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:25:51.841537 master-0 kubenswrapper[7604]: E0309 16:25:51.841443 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.841404927 +0000 UTC m=+16.895374400 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found Mar 09 16:25:51.841633 master-0 kubenswrapper[7604]: I0309 16:25:51.841586 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:25:51.841667 master-0 kubenswrapper[7604]: E0309 16:25:51.841641 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.841615033 +0000 UTC m=+16.895584496 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:25:51.841698 master-0 kubenswrapper[7604]: E0309 16:25:51.841684 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:51.841698 master-0 kubenswrapper[7604]: I0309 16:25:51.841685 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:25:51.841755 master-0 kubenswrapper[7604]: E0309 16:25:51.841722 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.841706206 +0000 UTC m=+16.895675639 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:25:51.841755 master-0 kubenswrapper[7604]: I0309 16:25:51.841744 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:25:51.841811 master-0 kubenswrapper[7604]: I0309 16:25:51.841771 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:25:51.841811 master-0 kubenswrapper[7604]: E0309 16:25:51.841800 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:25:51.841867 master-0 kubenswrapper[7604]: E0309 16:25:51.841849 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.84183945 +0000 UTC m=+16.895808943 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:25:51.841867 master-0 kubenswrapper[7604]: E0309 16:25:51.841861 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 09 16:25:51.841947 master-0 kubenswrapper[7604]: E0309 16:25:51.841899 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.841888771 +0000 UTC m=+16.895858414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found Mar 09 16:25:51.841947 master-0 kubenswrapper[7604]: E0309 16:25:51.841899 7604 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:51.841947 master-0 kubenswrapper[7604]: E0309 16:25:51.841930 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.841921512 +0000 UTC m=+16.895890945 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found Mar 09 16:25:51.942458 master-0 kubenswrapper[7604]: I0309 16:25:51.942368 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:25:51.942458 master-0 kubenswrapper[7604]: I0309 16:25:51.942463 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: I0309 16:25:51.942506 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: I0309 16:25:51.942534 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " 
pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: E0309 16:25:51.942575 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: E0309 16:25:51.942659 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.942639697 +0000 UTC m=+16.996609110 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: E0309 16:25:51.942720 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: E0309 16:25:51.942777 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: E0309 16:25:51.942801 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.942792992 +0000 UTC m=+16.996762415 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:51.942821 master-0 kubenswrapper[7604]: E0309 16:25:51.942835 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.942811132 +0000 UTC m=+16.996780625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.942838 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: I0309 16:25:51.942865 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.942881 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:59.942872004 +0000 UTC m=+16.996841537 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: I0309 16:25:51.942909 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: I0309 16:25:51.943005 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.943024 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.943064 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: I0309 16:25:51.943031 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.943106 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.943064 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.943052079 +0000 UTC m=+16.997021602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.943131 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.943125201 +0000 UTC m=+16.997094624 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: E0309 16:25:51.943141 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.943135251 +0000 UTC m=+16.997104674 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found Mar 09 16:25:51.943192 master-0 kubenswrapper[7604]: I0309 16:25:51.943164 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:25:51.943699 master-0 kubenswrapper[7604]: E0309 16:25:51.943211 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 09 16:25:51.943699 master-0 kubenswrapper[7604]: E0309 16:25:51.943250 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:25:51.943699 master-0 kubenswrapper[7604]: E0309 16:25:51.943272 7604 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.943266265 +0000 UTC m=+16.997235678 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found Mar 09 16:25:51.943699 master-0 kubenswrapper[7604]: E0309 16:25:51.943296 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.943277825 +0000 UTC m=+16.997247248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found Mar 09 16:25:52.286998 master-0 kubenswrapper[7604]: I0309 16:25:52.286895 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerStarted","Data":"20c3af1506f68ad55d72af72ba11892a7b1fbea246aad319e67c6ab36a77fae2"} Mar 09 16:25:52.629861 master-0 kubenswrapper[7604]: I0309 16:25:52.629738 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79b7b5f969-flxtt"] Mar 09 16:25:52.630327 master-0 kubenswrapper[7604]: I0309 16:25:52.630309 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.634676 master-0 kubenswrapper[7604]: I0309 16:25:52.632641 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 16:25:52.634676 master-0 kubenswrapper[7604]: I0309 16:25:52.632644 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 09 16:25:52.634676 master-0 kubenswrapper[7604]: I0309 16:25:52.634361 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 09 16:25:52.636967 master-0 kubenswrapper[7604]: I0309 16:25:52.635802 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 09 16:25:52.640347 master-0 kubenswrapper[7604]: I0309 16:25:52.640260 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 16:25:52.643523 master-0 kubenswrapper[7604]: I0309 16:25:52.643494 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 16:25:52.651436 master-0 kubenswrapper[7604]: I0309 16:25:52.650521 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b7b5f969-flxtt"] Mar 09 16:25:52.752731 master-0 kubenswrapper[7604]: I0309 16:25:52.752474 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.752731 master-0 kubenswrapper[7604]: I0309 16:25:52.752552 7604 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-proxy-ca-bundles\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.752731 master-0 kubenswrapper[7604]: I0309 16:25:52.752644 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmj8\" (UniqueName: \"kubernetes.io/projected/5821d180-5114-4f2a-93f3-02922538bef6-kube-api-access-xfmj8\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.752731 master-0 kubenswrapper[7604]: I0309 16:25:52.752697 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.753034 master-0 kubenswrapper[7604]: I0309 16:25:52.752806 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-config\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.853407 master-0 kubenswrapper[7604]: I0309 16:25:52.853353 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod 
\"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.853407 master-0 kubenswrapper[7604]: I0309 16:25:52.853406 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-proxy-ca-bundles\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.853667 master-0 kubenswrapper[7604]: I0309 16:25:52.853465 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfmj8\" (UniqueName: \"kubernetes.io/projected/5821d180-5114-4f2a-93f3-02922538bef6-kube-api-access-xfmj8\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.853667 master-0 kubenswrapper[7604]: E0309 16:25:52.853564 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:52.853747 master-0 kubenswrapper[7604]: E0309 16:25:52.853691 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:53.353666715 +0000 UTC m=+10.407636198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : configmap "client-ca" not found Mar 09 16:25:52.853825 master-0 kubenswrapper[7604]: I0309 16:25:52.853758 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.853948 master-0 kubenswrapper[7604]: E0309 16:25:52.853909 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:52.854028 master-0 kubenswrapper[7604]: E0309 16:25:52.853955 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:53.353946923 +0000 UTC m=+10.407916346 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : secret "serving-cert" not found Mar 09 16:25:52.854028 master-0 kubenswrapper[7604]: I0309 16:25:52.853999 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-config\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.855027 master-0 kubenswrapper[7604]: I0309 16:25:52.855003 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-config\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.855365 master-0 kubenswrapper[7604]: I0309 16:25:52.855332 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-proxy-ca-bundles\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:52.874769 master-0 kubenswrapper[7604]: I0309 16:25:52.874622 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfmj8\" (UniqueName: \"kubernetes.io/projected/5821d180-5114-4f2a-93f3-02922538bef6-kube-api-access-xfmj8\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:53.118290 master-0 
kubenswrapper[7604]: I0309 16:25:53.118230 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b25d2d1-22b4-483b-bd2b-43a29030cb79" path="/var/lib/kubelet/pods/8b25d2d1-22b4-483b-bd2b-43a29030cb79/volumes" Mar 09 16:25:53.260770 master-0 kubenswrapper[7604]: I0309 16:25:53.260702 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:53.262146 master-0 kubenswrapper[7604]: E0309 16:25:53.261162 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:53.262146 master-0 kubenswrapper[7604]: E0309 16:25:53.261266 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:57.261241628 +0000 UTC m=+14.315211071 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : secret "serving-cert" not found Mar 09 16:25:53.262146 master-0 kubenswrapper[7604]: I0309 16:25:53.261877 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:53.262146 master-0 kubenswrapper[7604]: E0309 16:25:53.262080 7604 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:53.262146 master-0 kubenswrapper[7604]: E0309 16:25:53.262121 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:25:57.262108443 +0000 UTC m=+14.316077876 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : configmap "client-ca" not found Mar 09 16:25:53.363043 master-0 kubenswrapper[7604]: I0309 16:25:53.362606 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:53.363609 master-0 kubenswrapper[7604]: I0309 16:25:53.363169 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:53.363609 master-0 kubenswrapper[7604]: E0309 16:25:53.362781 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:53.363609 master-0 kubenswrapper[7604]: E0309 16:25:53.363379 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:54.363360084 +0000 UTC m=+11.417329507 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : secret "serving-cert" not found Mar 09 16:25:53.363609 master-0 kubenswrapper[7604]: E0309 16:25:53.363315 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:53.363609 master-0 kubenswrapper[7604]: E0309 16:25:53.363437 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:54.363410995 +0000 UTC m=+11.417380418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : configmap "client-ca" not found Mar 09 16:25:54.374940 master-0 kubenswrapper[7604]: I0309 16:25:54.374887 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:54.375714 master-0 kubenswrapper[7604]: E0309 16:25:54.375178 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:54.375714 master-0 kubenswrapper[7604]: E0309 16:25:54.375274 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. 
No retries permitted until 2026-03-09 16:25:56.375248381 +0000 UTC m=+13.429217834 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : secret "serving-cert" not found Mar 09 16:25:54.376045 master-0 kubenswrapper[7604]: I0309 16:25:54.376005 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:54.376222 master-0 kubenswrapper[7604]: E0309 16:25:54.376183 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:54.376300 master-0 kubenswrapper[7604]: E0309 16:25:54.376283 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:56.376257059 +0000 UTC m=+13.430226492 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : configmap "client-ca" not found Mar 09 16:25:56.400118 master-0 kubenswrapper[7604]: I0309 16:25:56.399771 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:56.400118 master-0 kubenswrapper[7604]: I0309 16:25:56.399846 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:25:56.400995 master-0 kubenswrapper[7604]: E0309 16:25:56.400103 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:56.400995 master-0 kubenswrapper[7604]: E0309 16:25:56.400248 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:25:56.400995 master-0 kubenswrapper[7604]: E0309 16:25:56.400273 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:00.400238901 +0000 UTC m=+17.454208514 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : configmap "client-ca" not found Mar 09 16:25:56.400995 master-0 kubenswrapper[7604]: E0309 16:25:56.400316 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:00.400297692 +0000 UTC m=+17.454267115 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : secret "serving-cert" not found Mar 09 16:25:57.311863 master-0 kubenswrapper[7604]: I0309 16:25:57.311516 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:25:57.312173 master-0 kubenswrapper[7604]: E0309 16:25:57.311668 7604 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:25:57.312173 master-0 kubenswrapper[7604]: I0309 16:25:57.311983 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " 
pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:25:57.312173 master-0 kubenswrapper[7604]: E0309 16:25:57.311998 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:26:05.311977858 +0000 UTC m=+22.365947281 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : configmap "client-ca" not found
Mar 09 16:25:57.312173 master-0 kubenswrapper[7604]: E0309 16:25:57.312120 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:25:57.312346 master-0 kubenswrapper[7604]: E0309 16:25:57.312208 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:26:05.312184244 +0000 UTC m=+22.366153717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : secret "serving-cert" not found
Mar 09 16:25:58.595480 master-0 kubenswrapper[7604]: I0309 16:25:58.595402 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5ddd5549bd-4wtqd"]
Mar 09 16:25:58.596322 master-0 kubenswrapper[7604]: I0309 16:25:58.596293 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.599005 master-0 kubenswrapper[7604]: I0309 16:25:58.598969 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 09 16:25:58.599241 master-0 kubenswrapper[7604]: I0309 16:25:58.599221 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 09 16:25:58.599573 master-0 kubenswrapper[7604]: I0309 16:25:58.599488 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 09 16:25:58.602895 master-0 kubenswrapper[7604]: I0309 16:25:58.602856 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 09 16:25:58.603052 master-0 kubenswrapper[7604]: I0309 16:25:58.602850 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 09 16:25:58.603391 master-0 kubenswrapper[7604]: I0309 16:25:58.603137 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 09 16:25:58.603391 master-0 kubenswrapper[7604]: I0309 16:25:58.603211 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 09 16:25:58.603391 master-0 kubenswrapper[7604]: I0309 16:25:58.603311 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 09 16:25:58.603622 master-0 kubenswrapper[7604]: I0309 16:25:58.603607 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 09 16:25:58.606794 master-0 kubenswrapper[7604]: I0309 16:25:58.606744 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 09 16:25:58.613331 master-0 kubenswrapper[7604]: I0309 16:25:58.613293 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5ddd5549bd-4wtqd"]
Mar 09 16:25:58.632497 master-0 kubenswrapper[7604]: I0309 16:25:58.632470 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.632715 master-0 kubenswrapper[7604]: I0309 16:25:58.632699 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-node-pullsecrets\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.632832 master-0 kubenswrapper[7604]: I0309 16:25:58.632818 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.632911 master-0 kubenswrapper[7604]: I0309 16:25:58.632898 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-audit-dir\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.633068 master-0 kubenswrapper[7604]: I0309 16:25:58.633044 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-etcd-serving-ca\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.633496 master-0 kubenswrapper[7604]: I0309 16:25:58.633473 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkcq\" (UniqueName: \"kubernetes.io/projected/3d071869-9372-4576-947f-520f9191abe3-kube-api-access-bkkcq\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.633686 master-0 kubenswrapper[7604]: I0309 16:25:58.633673 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-image-import-ca\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.633816 master-0 kubenswrapper[7604]: I0309 16:25:58.633797 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.633916 master-0 kubenswrapper[7604]: I0309 16:25:58.633902 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-config\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.634025 master-0 kubenswrapper[7604]: I0309 16:25:58.634011 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-trusted-ca-bundle\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.634098 master-0 kubenswrapper[7604]: I0309 16:25:58.634087 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-encryption-config\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.734751 master-0 kubenswrapper[7604]: I0309 16:25:58.734704 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-image-import-ca\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.734751 master-0 kubenswrapper[7604]: I0309 16:25:58.734744 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735020 master-0 kubenswrapper[7604]: I0309 16:25:58.734764 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-config\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735020 master-0 kubenswrapper[7604]: I0309 16:25:58.734782 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-encryption-config\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735020 master-0 kubenswrapper[7604]: E0309 16:25:58.734931 7604 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 09 16:25:58.735151 master-0 kubenswrapper[7604]: E0309 16:25:58.735022 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.23499691 +0000 UTC m=+16.288966343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "etcd-client" not found
Mar 09 16:25:58.735303 master-0 kubenswrapper[7604]: I0309 16:25:58.735279 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-trusted-ca-bundle\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735471 master-0 kubenswrapper[7604]: I0309 16:25:58.735311 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-image-import-ca\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735620 master-0 kubenswrapper[7604]: I0309 16:25:58.735601 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735821 master-0 kubenswrapper[7604]: I0309 16:25:58.735802 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-node-pullsecrets\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.735975 master-0 kubenswrapper[7604]: E0309 16:25:58.735918 7604 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 09 16:25:58.736062 master-0 kubenswrapper[7604]: I0309 16:25:58.735896 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-node-pullsecrets\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.736062 master-0 kubenswrapper[7604]: I0309 16:25:58.736043 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-trusted-ca-bundle\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.736062 master-0 kubenswrapper[7604]: I0309 16:25:58.735931 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-config\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.736253 master-0 kubenswrapper[7604]: E0309 16:25:58.736066 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.23602582 +0000 UTC m=+16.289995443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "serving-cert" not found
Mar 09 16:25:58.736636 master-0 kubenswrapper[7604]: E0309 16:25:58.736519 7604 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 09 16:25:58.736636 master-0 kubenswrapper[7604]: E0309 16:25:58.736593 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:25:59.236569785 +0000 UTC m=+16.290539218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : configmap "audit-0" not found
Mar 09 16:25:58.736797 master-0 kubenswrapper[7604]: I0309 16:25:58.736407 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.736955 master-0 kubenswrapper[7604]: I0309 16:25:58.736932 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-audit-dir\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.737136 master-0 kubenswrapper[7604]: I0309 16:25:58.737113 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-etcd-serving-ca\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.737365 master-0 kubenswrapper[7604]: I0309 16:25:58.737047 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-audit-dir\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.737519 master-0 kubenswrapper[7604]: I0309 16:25:58.737499 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkcq\" (UniqueName: \"kubernetes.io/projected/3d071869-9372-4576-947f-520f9191abe3-kube-api-access-bkkcq\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.737713 master-0 kubenswrapper[7604]: I0309 16:25:58.737661 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-etcd-serving-ca\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.740399 master-0 kubenswrapper[7604]: I0309 16:25:58.740356 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-encryption-config\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:58.753415 master-0 kubenswrapper[7604]: I0309 16:25:58.753381 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkcq\" (UniqueName: \"kubernetes.io/projected/3d071869-9372-4576-947f-520f9191abe3-kube-api-access-bkkcq\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:59.244117 master-0 kubenswrapper[7604]: I0309 16:25:59.244040 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:59.244117 master-0 kubenswrapper[7604]: I0309 16:25:59.244122 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:59.244411 master-0 kubenswrapper[7604]: E0309 16:25:59.244276 7604 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 09 16:25:59.244411 master-0 kubenswrapper[7604]: E0309 16:25:59.244377 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:00.24435435 +0000 UTC m=+17.298323843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "etcd-client" not found
Mar 09 16:25:59.244773 master-0 kubenswrapper[7604]: E0309 16:25:59.244734 7604 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 09 16:25:59.244831 master-0 kubenswrapper[7604]: E0309 16:25:59.244814 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:00.244792742 +0000 UTC m=+17.298762175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "serving-cert" not found
Mar 09 16:25:59.244891 master-0 kubenswrapper[7604]: I0309 16:25:59.244859 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:25:59.248562 master-0 kubenswrapper[7604]: E0309 16:25:59.248521 7604 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 09 16:25:59.248706 master-0 kubenswrapper[7604]: E0309 16:25:59.248609 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:00.248583979 +0000 UTC m=+17.302553402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : configmap "audit-0" not found
Mar 09 16:25:59.310341 master-0 kubenswrapper[7604]: I0309 16:25:59.310295 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" event={"ID":"a62ba179-443d-424f-8cff-c75677e8cd5c","Type":"ContainerStarted","Data":"556fa937e7c3581b8c9b14e4926a7f4f60005bc952c23b42c146238b8e0e37d0"}
Mar 09 16:25:59.853210 master-0 kubenswrapper[7604]: I0309 16:25:59.852789 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:25:59.853210 master-0 kubenswrapper[7604]: I0309 16:25:59.853182 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853024 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853318 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.853291992 +0000 UTC m=+32.907261475 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: I0309 16:25:59.853242 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853340 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853477 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.853467627 +0000 UTC m=+32.907437160 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853377 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: I0309 16:25:59.853482 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853515 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.853507889 +0000 UTC m=+32.907477442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "performance-addon-operator-webhook-cert" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: I0309 16:25:59.853546 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: I0309 16:25:59.853591 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853621 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853695 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.853665674 +0000 UTC m=+32.907635167 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853713 7604 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853738 7604 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853778 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert podName:77a20946-c236-417e-8333-6d1aac88bbc2 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.853758166 +0000 UTC m=+32.907727659 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert") pod "cluster-version-operator-745944c6b7-pwnsk" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2") : secret "cluster-version-operator-serving-cert" not found
Mar 09 16:25:59.853924 master-0 kubenswrapper[7604]: E0309 16:25:59.853797 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls podName:dc732d23-37bc-41c2-9f9b-333ba517c1f8 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.853789207 +0000 UTC m=+32.907758730 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-gglsc" (UID: "dc732d23-37bc-41c2-9f9b-333ba517c1f8") : secret "node-tuning-operator-tls" not found
Mar 09 16:25:59.868791 master-0 kubenswrapper[7604]: I0309 16:25:59.867562 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m"]
Mar 09 16:25:59.868791 master-0 kubenswrapper[7604]: I0309 16:25:59.868337 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m"
Mar 09 16:25:59.877743 master-0 kubenswrapper[7604]: I0309 16:25:59.877679 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m"]
Mar 09 16:25:59.955034 master-0 kubenswrapper[7604]: I0309 16:25:59.954988 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:25:59.955379 master-0 kubenswrapper[7604]: I0309 16:25:59.955358 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkqg\" (UniqueName: \"kubernetes.io/projected/57036838-9f42-4ea1-a5c9-77f820cc22c9-kube-api-access-czkqg\") pod \"csi-snapshot-controller-7577d6f48-f594m\" (UID: \"57036838-9f42-4ea1-a5c9-77f820cc22c9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m"
Mar 09 16:25:59.955545 master-0 kubenswrapper[7604]: I0309 16:25:59.955526 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:25:59.955680 master-0 kubenswrapper[7604]: I0309 16:25:59.955661 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:59.955787 master-0 kubenswrapper[7604]: I0309 16:25:59.955767 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:25:59.955893 master-0 kubenswrapper[7604]: I0309 16:25:59.955875 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:25:59.956017 master-0 kubenswrapper[7604]: I0309 16:25:59.955999 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:25:59.956150 master-0 kubenswrapper[7604]: I0309 16:25:59.956133 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:25:59.956253 master-0 kubenswrapper[7604]: I0309 16:25:59.956236 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:25:59.956343 master-0 kubenswrapper[7604]: E0309 16:25:59.955210 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:59.956487 master-0 kubenswrapper[7604]: E0309 16:25:59.955591 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 09 16:25:59.956546 master-0 kubenswrapper[7604]: E0309 16:25:59.956476 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:25:59.956546 master-0 kubenswrapper[7604]: E0309 16:25:59.955730 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:59.956546 master-0 kubenswrapper[7604]: E0309 16:25:59.955857 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:25:59.956546 master-0 kubenswrapper[7604]: E0309 16:25:59.955948 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:25:59.956546 master-0 kubenswrapper[7604]: E0309 16:25:59.956063 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:25:59.956736 master-0 kubenswrapper[7604]: E0309 16:25:59.956208 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 09 16:25:59.956736 master-0 kubenswrapper[7604]: E0309 16:25:59.956566 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:25:59.956736 master-0 kubenswrapper[7604]: I0309 16:25:59.956466 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:25:59.956878 master-0 kubenswrapper[7604]: E0309 16:25:59.956863 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.956458747 +0000 UTC m=+33.010428210 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found
Mar 09 16:25:59.956968 master-0 kubenswrapper[7604]: E0309 16:25:59.956957 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.95694457 +0000 UTC m=+33.010914043 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found
Mar 09 16:25:59.957064 master-0 kubenswrapper[7604]: E0309 16:25:59.957051 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.957040663 +0000 UTC m=+33.011010146 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:25:59.957154 master-0 kubenswrapper[7604]: E0309 16:25:59.957142 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed.
No retries permitted until 2026-03-09 16:26:15.957130545 +0000 UTC m=+33.011099978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found Mar 09 16:25:59.957251 master-0 kubenswrapper[7604]: E0309 16:25:59.957238 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.957225119 +0000 UTC m=+33.011194542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found Mar 09 16:25:59.957347 master-0 kubenswrapper[7604]: E0309 16:25:59.957332 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.957322072 +0000 UTC m=+33.011291495 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found Mar 09 16:25:59.957458 master-0 kubenswrapper[7604]: E0309 16:25:59.957429 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.957415694 +0000 UTC m=+33.011385117 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found Mar 09 16:25:59.957653 master-0 kubenswrapper[7604]: E0309 16:25:59.957638 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.95762704 +0000 UTC m=+33.011596473 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found Mar 09 16:25:59.957750 master-0 kubenswrapper[7604]: E0309 16:25:59.957738 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. 
No retries permitted until 2026-03-09 16:26:15.957728463 +0000 UTC m=+33.011697886 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found Mar 09 16:26:00.058628 master-0 kubenswrapper[7604]: I0309 16:26:00.058548 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkqg\" (UniqueName: \"kubernetes.io/projected/57036838-9f42-4ea1-a5c9-77f820cc22c9-kube-api-access-czkqg\") pod \"csi-snapshot-controller-7577d6f48-f594m\" (UID: \"57036838-9f42-4ea1-a5c9-77f820cc22c9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" Mar 09 16:26:00.079106 master-0 kubenswrapper[7604]: I0309 16:26:00.079019 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkqg\" (UniqueName: \"kubernetes.io/projected/57036838-9f42-4ea1-a5c9-77f820cc22c9-kube-api-access-czkqg\") pod \"csi-snapshot-controller-7577d6f48-f594m\" (UID: \"57036838-9f42-4ea1-a5c9-77f820cc22c9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" Mar 09 16:26:00.187722 master-0 kubenswrapper[7604]: I0309 16:26:00.187651 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" Mar 09 16:26:00.261173 master-0 kubenswrapper[7604]: I0309 16:26:00.261121 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:00.261292 master-0 kubenswrapper[7604]: I0309 16:26:00.261191 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:00.261627 master-0 kubenswrapper[7604]: I0309 16:26:00.261598 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:00.261727 master-0 kubenswrapper[7604]: E0309 16:26:00.261693 7604 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 09 16:26:00.261810 master-0 kubenswrapper[7604]: E0309 16:26:00.261795 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:02.261768702 +0000 UTC m=+19.315738205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "serving-cert" not found Mar 09 16:26:00.261881 master-0 kubenswrapper[7604]: E0309 16:26:00.261842 7604 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 09 16:26:00.261913 master-0 kubenswrapper[7604]: E0309 16:26:00.261899 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:02.261881345 +0000 UTC m=+19.315850768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : configmap "audit-0" not found Mar 09 16:26:00.270016 master-0 kubenswrapper[7604]: I0309 16:26:00.269975 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:00.316639 master-0 kubenswrapper[7604]: I0309 16:26:00.315950 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" event={"ID":"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a","Type":"ContainerStarted","Data":"ed8140bb922b35373782d1b39705b1d6200c0f0fb01785807a86c3fad481d2c8"} Mar 09 16:26:00.370908 master-0 kubenswrapper[7604]: I0309 16:26:00.370388 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m"] Mar 09 16:26:00.468108 master-0 kubenswrapper[7604]: I0309 16:26:00.468035 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:26:00.468341 master-0 kubenswrapper[7604]: E0309 16:26:00.468184 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:26:00.468341 master-0 kubenswrapper[7604]: E0309 16:26:00.468271 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:08.468249435 +0000 UTC m=+25.522218908 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : configmap "client-ca" not found Mar 09 16:26:00.468536 master-0 kubenswrapper[7604]: I0309 16:26:00.468508 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:26:00.468726 master-0 kubenswrapper[7604]: E0309 16:26:00.468697 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:00.468787 master-0 kubenswrapper[7604]: E0309 16:26:00.468781 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:08.468758709 +0000 UTC m=+25.522728212 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : secret "serving-cert" not found Mar 09 16:26:01.320309 master-0 kubenswrapper[7604]: I0309 16:26:01.320241 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerStarted","Data":"9851e44d22a4912195681afea0e67c8f9b72db3658de58af22ee3dada2512884"} Mar 09 16:26:02.300719 master-0 kubenswrapper[7604]: I0309 16:26:02.300487 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:02.300719 master-0 kubenswrapper[7604]: I0309 16:26:02.300622 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:02.301043 master-0 kubenswrapper[7604]: E0309 16:26:02.300825 7604 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 09 16:26:02.301043 master-0 kubenswrapper[7604]: E0309 16:26:02.300948 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:06.300921739 +0000 UTC m=+23.354891232 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "serving-cert" not found Mar 09 16:26:02.301043 master-0 kubenswrapper[7604]: E0309 16:26:02.300985 7604 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 09 16:26:02.301204 master-0 kubenswrapper[7604]: E0309 16:26:02.301054 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:06.301035872 +0000 UTC m=+23.355005385 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : configmap "audit-0" not found Mar 09 16:26:04.143038 master-0 kubenswrapper[7604]: I0309 16:26:04.142462 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:26:04.143822 master-0 kubenswrapper[7604]: I0309 16:26:04.143204 7604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 16:26:04.160096 master-0 kubenswrapper[7604]: I0309 16:26:04.160031 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:26:05.360955 master-0 kubenswrapper[7604]: I0309 16:26:05.360860 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " 
pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:26:05.361684 master-0 kubenswrapper[7604]: E0309 16:26:05.361624 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:05.361840 master-0 kubenswrapper[7604]: I0309 16:26:05.361800 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") pod \"route-controller-manager-65959ff4c9-fh2s4\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:26:05.361896 master-0 kubenswrapper[7604]: E0309 16:26:05.361837 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:26:21.361801571 +0000 UTC m=+38.415771014 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : secret "serving-cert" not found Mar 09 16:26:05.361944 master-0 kubenswrapper[7604]: E0309 16:26:05.361904 7604 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:26:05.361998 master-0 kubenswrapper[7604]: E0309 16:26:05.361977 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca podName:1d6c7cfb-2226-427a-99ee-d31f14aa975f nodeName:}" failed. No retries permitted until 2026-03-09 16:26:21.361955685 +0000 UTC m=+38.415925168 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca") pod "route-controller-manager-65959ff4c9-fh2s4" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f") : configmap "client-ca" not found Mar 09 16:26:06.381194 master-0 kubenswrapper[7604]: I0309 16:26:06.381100 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:06.382105 master-0 kubenswrapper[7604]: E0309 16:26:06.381345 7604 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 09 16:26:06.382105 master-0 kubenswrapper[7604]: E0309 16:26:06.381550 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:14.381512348 +0000 UTC m=+31.435481961 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : configmap "audit-0" not found Mar 09 16:26:06.382105 master-0 kubenswrapper[7604]: I0309 16:26:06.381873 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") pod \"apiserver-5ddd5549bd-4wtqd\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:06.382271 master-0 kubenswrapper[7604]: E0309 16:26:06.382116 7604 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 09 16:26:06.382271 master-0 kubenswrapper[7604]: E0309 16:26:06.382259 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert podName:3d071869-9372-4576-947f-520f9191abe3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:14.382231797 +0000 UTC m=+31.436201400 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert") pod "apiserver-5ddd5549bd-4wtqd" (UID: "3d071869-9372-4576-947f-520f9191abe3") : secret "serving-cert" not found Mar 09 16:26:08.510106 master-0 kubenswrapper[7604]: I0309 16:26:08.510041 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:26:08.510954 master-0 kubenswrapper[7604]: E0309 16:26:08.510240 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:08.510954 master-0 kubenswrapper[7604]: E0309 16:26:08.510319 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:24.510299869 +0000 UTC m=+41.564269302 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : secret "serving-cert" not found Mar 09 16:26:08.510954 master-0 kubenswrapper[7604]: I0309 16:26:08.510459 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") pod \"controller-manager-79b7b5f969-flxtt\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:26:08.510954 master-0 kubenswrapper[7604]: E0309 16:26:08.510533 7604 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 09 16:26:08.510954 master-0 kubenswrapper[7604]: E0309 16:26:08.510567 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca podName:5821d180-5114-4f2a-93f3-02922538bef6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:24.510557286 +0000 UTC m=+41.564526709 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca") pod "controller-manager-79b7b5f969-flxtt" (UID: "5821d180-5114-4f2a-93f3-02922538bef6") : configmap "client-ca" not found Mar 09 16:26:09.483726 master-0 kubenswrapper[7604]: I0309 16:26:09.483673 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 09 16:26:09.484409 master-0 kubenswrapper[7604]: I0309 16:26:09.484375 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.489256 master-0 kubenswrapper[7604]: I0309 16:26:09.489206 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 09 16:26:09.521706 master-0 kubenswrapper[7604]: I0309 16:26:09.521638 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 09 16:26:09.554468 master-0 kubenswrapper[7604]: I0309 16:26:09.554389 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"] Mar 09 16:26:09.555056 master-0 kubenswrapper[7604]: I0309 16:26:09.555030 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.561253 master-0 kubenswrapper[7604]: I0309 16:26:09.561186 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 09 16:26:09.561724 master-0 kubenswrapper[7604]: I0309 16:26:09.561698 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 09 16:26:09.567150 master-0 kubenswrapper[7604]: I0309 16:26:09.567070 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 09 16:26:09.605194 master-0 kubenswrapper[7604]: I0309 16:26:09.605140 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"] Mar 09 16:26:09.633823 master-0 kubenswrapper[7604]: I0309 16:26:09.633777 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.633916 master-0 kubenswrapper[7604]: I0309 16:26:09.633858 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.633916 master-0 kubenswrapper[7604]: I0309 16:26:09.633900 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.634070 master-0 kubenswrapper[7604]: I0309 16:26:09.634039 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-var-lock\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.634117 master-0 kubenswrapper[7604]: I0309 16:26:09.634076 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 
16:26:09.634172 master-0 kubenswrapper[7604]: I0309 16:26:09.634133 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jvl\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-kube-api-access-h8jvl\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.634215 master-0 kubenswrapper[7604]: I0309 16:26:09.634177 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.634215 master-0 kubenswrapper[7604]: I0309 16:26:09.634206 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.635298 master-0 kubenswrapper[7604]: I0309 16:26:09.634480 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"] Mar 09 16:26:09.635298 master-0 kubenswrapper[7604]: I0309 16:26:09.635120 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.644456 master-0 kubenswrapper[7604]: I0309 16:26:09.640021 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 09 16:26:09.644456 master-0 kubenswrapper[7604]: I0309 16:26:09.642703 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 09 16:26:09.652690 master-0 kubenswrapper[7604]: I0309 16:26:09.649220 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 09 16:26:09.652690 master-0 kubenswrapper[7604]: I0309 16:26:09.650908 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 09 16:26:09.672457 master-0 kubenswrapper[7604]: I0309 16:26:09.666259 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"] Mar 09 16:26:09.713278 master-0 kubenswrapper[7604]: I0309 16:26:09.712413 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5ddd5549bd-4wtqd"] Mar 09 16:26:09.713278 master-0 kubenswrapper[7604]: E0309 16:26:09.713120 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" podUID="3d071869-9372-4576-947f-520f9191abe3" Mar 09 16:26:09.735538 master-0 kubenswrapper[7604]: I0309 16:26:09.735372 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-var-lock\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.735538 master-0 
kubenswrapper[7604]: I0309 16:26:09.735420 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.735538 master-0 kubenswrapper[7604]: I0309 16:26:09.735483 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jvl\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-kube-api-access-h8jvl\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.735913 master-0 kubenswrapper[7604]: I0309 16:26:09.735603 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-var-lock\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.735913 master-0 kubenswrapper[7604]: I0309 16:26:09.735785 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.735913 master-0 kubenswrapper[7604]: I0309 16:26:09.735834 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: 
\"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.735913 master-0 kubenswrapper[7604]: I0309 16:26:09.735867 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.735913 master-0 kubenswrapper[7604]: I0309 16:26:09.735899 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.736112 master-0 kubenswrapper[7604]: I0309 16:26:09.735971 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.736112 master-0 kubenswrapper[7604]: I0309 16:26:09.735996 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.736112 master-0 kubenswrapper[7604]: I0309 16:26:09.736049 7604 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rjs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-kube-api-access-p8rjs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.736112 master-0 kubenswrapper[7604]: I0309 16:26:09.736098 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.736267 master-0 kubenswrapper[7604]: I0309 16:26:09.736169 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.736267 master-0 kubenswrapper[7604]: I0309 16:26:09.736205 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.737714 master-0 kubenswrapper[7604]: I0309 16:26:09.736768 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.737714 master-0 kubenswrapper[7604]: I0309 16:26:09.736842 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.737714 master-0 kubenswrapper[7604]: I0309 16:26:09.737211 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.737714 master-0 kubenswrapper[7604]: I0309 16:26:09.737294 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.737975 master-0 kubenswrapper[7604]: I0309 16:26:09.737895 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" 
Mar 09 16:26:09.749562 master-0 kubenswrapper[7604]: I0309 16:26:09.749519 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.838414 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.838657 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.838701 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.838743 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rjs\" (UniqueName: 
\"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-kube-api-access-p8rjs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.838786 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.838915 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.839701 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.840326 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.843462 
master-0 kubenswrapper[7604]: E0309 16:26:09.840466 7604 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: E0309 16:26:09.840528 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs podName:4a2aa6f3-f049-423a-a8f5-5d33fc214a7b nodeName:}" failed. No retries permitted until 2026-03-09 16:26:10.340503867 +0000 UTC m=+27.394473290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-xrgml" (UID: "4a2aa6f3-f049-423a-a8f5-5d33fc214a7b") : secret "catalogserver-cert" not found Mar 09 16:26:09.843462 master-0 kubenswrapper[7604]: I0309 16:26:09.840850 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.861938 master-0 kubenswrapper[7604]: I0309 16:26:09.855501 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:09.862180 master-0 kubenswrapper[7604]: I0309 16:26:09.861852 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jvl\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-kube-api-access-h8jvl\") pod 
\"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.884465 master-0 kubenswrapper[7604]: I0309 16:26:09.873875 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:26:09.884465 master-0 kubenswrapper[7604]: I0309 16:26:09.880332 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kube-api-access\") pod \"installer-1-master-0\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:09.924455 master-0 kubenswrapper[7604]: I0309 16:26:09.920157 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rjs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-kube-api-access-p8rjs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:10.093067 master-0 kubenswrapper[7604]: I0309 16:26:10.092973 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79b7b5f969-flxtt"] Mar 09 16:26:10.095809 master-0 kubenswrapper[7604]: E0309 16:26:10.093916 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" podUID="5821d180-5114-4f2a-93f3-02922538bef6" Mar 09 16:26:10.120458 master-0 kubenswrapper[7604]: I0309 16:26:10.112094 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"] Mar 09 16:26:10.120458 master-0 kubenswrapper[7604]: E0309 16:26:10.112690 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" podUID="1d6c7cfb-2226-427a-99ee-d31f14aa975f" Mar 09 16:26:10.120458 master-0 kubenswrapper[7604]: I0309 16:26:10.112803 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:10.178459 master-0 kubenswrapper[7604]: I0309 16:26:10.171494 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"] Mar 09 16:26:10.193457 master-0 kubenswrapper[7604]: W0309 16:26:10.185294 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc72e89f0_37ad_4515_89ba_ba1f52ba61f0.slice/crio-608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30 WatchSource:0}: Error finding container 608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30: Status 404 returned error can't find the container with id 608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30 Mar 09 16:26:10.223113 master-0 kubenswrapper[7604]: I0309 16:26:10.223059 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-6r6g2"] Mar 09 16:26:10.224300 master-0 kubenswrapper[7604]: I0309 16:26:10.224261 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.229471 master-0 kubenswrapper[7604]: I0309 16:26:10.229027 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 09 16:26:10.229753 master-0 kubenswrapper[7604]: I0309 16:26:10.229725 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 09 16:26:10.230126 master-0 kubenswrapper[7604]: I0309 16:26:10.230073 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 09 16:26:10.230661 master-0 kubenswrapper[7604]: I0309 16:26:10.230344 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 09 16:26:10.243912 master-0 kubenswrapper[7604]: I0309 16:26:10.242740 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-6r6g2"] Mar 09 16:26:10.349845 master-0 kubenswrapper[7604]: I0309 16:26:10.349696 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-cabundle\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.349845 master-0 kubenswrapper[7604]: I0309 16:26:10.349776 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:10.350136 master-0 kubenswrapper[7604]: I0309 16:26:10.349933 7604 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-key\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.350136 master-0 kubenswrapper[7604]: I0309 16:26:10.350007 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqdm\" (UniqueName: \"kubernetes.io/projected/af4aa8d4-09e1-4589-b7bf-885617a11337-kube-api-access-whqdm\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.350475 master-0 kubenswrapper[7604]: E0309 16:26:10.350393 7604 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 09 16:26:10.350538 master-0 kubenswrapper[7604]: E0309 16:26:10.350514 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs podName:4a2aa6f3-f049-423a-a8f5-5d33fc214a7b nodeName:}" failed. No retries permitted until 2026-03-09 16:26:11.350487164 +0000 UTC m=+28.404456587 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-xrgml" (UID: "4a2aa6f3-f049-423a-a8f5-5d33fc214a7b") : secret "catalogserver-cert" not found Mar 09 16:26:10.374960 master-0 kubenswrapper[7604]: I0309 16:26:10.374888 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 09 16:26:10.394590 master-0 kubenswrapper[7604]: I0309 16:26:10.394531 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" event={"ID":"c72e89f0-37ad-4515-89ba-ba1f52ba61f0","Type":"ContainerStarted","Data":"608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30"} Mar 09 16:26:10.399933 master-0 kubenswrapper[7604]: I0309 16:26:10.399860 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerStarted","Data":"ca16434e380b6db2be43284967084d34f8d84b54a570fafe10c2de9a729bf691"} Mar 09 16:26:10.402820 master-0 kubenswrapper[7604]: I0309 16:26:10.402750 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:10.403214 master-0 kubenswrapper[7604]: I0309 16:26:10.403162 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:26:10.403374 master-0 kubenswrapper[7604]: I0309 16:26:10.403329 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerStarted","Data":"08a50d026ef459ff3233ee74fc8df1d0208854ef10f3f9cdd3c02dba9aa4e4f2"} Mar 09 16:26:10.405025 master-0 kubenswrapper[7604]: I0309 16:26:10.404348 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:26:10.416623 master-0 kubenswrapper[7604]: I0309 16:26:10.416568 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd" Mar 09 16:26:10.424850 master-0 kubenswrapper[7604]: I0309 16:26:10.424757 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:26:10.451161 master-0 kubenswrapper[7604]: I0309 16:26:10.451095 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-key\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.451712 master-0 kubenswrapper[7604]: I0309 16:26:10.451538 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqdm\" (UniqueName: \"kubernetes.io/projected/af4aa8d4-09e1-4589-b7bf-885617a11337-kube-api-access-whqdm\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.451906 master-0 kubenswrapper[7604]: I0309 
16:26:10.451856 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-cabundle\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.453347 master-0 kubenswrapper[7604]: I0309 16:26:10.453046 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-cabundle\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.456914 master-0 kubenswrapper[7604]: I0309 16:26:10.456835 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt" Mar 09 16:26:10.463257 master-0 kubenswrapper[7604]: I0309 16:26:10.463187 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-key\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.471829 master-0 kubenswrapper[7604]: I0309 16:26:10.471779 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4" Mar 09 16:26:10.472288 master-0 kubenswrapper[7604]: I0309 16:26:10.472192 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podStartSLOduration=2.231033154 podStartE2EDuration="11.472166817s" podCreationTimestamp="2026-03-09 16:25:59 +0000 UTC" firstStartedPulling="2026-03-09 16:26:00.386014998 +0000 UTC m=+17.439984421" lastFinishedPulling="2026-03-09 16:26:09.627148661 +0000 UTC m=+26.681118084" observedRunningTime="2026-03-09 16:26:10.467953089 +0000 UTC m=+27.521922502" watchObservedRunningTime="2026-03-09 16:26:10.472166817 +0000 UTC m=+27.526136240" Mar 09 16:26:10.511059 master-0 kubenswrapper[7604]: I0309 16:26:10.510540 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqdm\" (UniqueName: \"kubernetes.io/projected/af4aa8d4-09e1-4589-b7bf-885617a11337-kube-api-access-whqdm\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553649 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-config\") pod \"5821d180-5114-4f2a-93f3-02922538bef6\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553729 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-encryption-config\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553764 7604 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-image-import-ca\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553799 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-node-pullsecrets\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553838 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkkcq\" (UniqueName: \"kubernetes.io/projected/3d071869-9372-4576-947f-520f9191abe3-kube-api-access-bkkcq\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553873 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.553993 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-proxy-ca-bundles\") pod \"5821d180-5114-4f2a-93f3-02922538bef6\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.554031 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-trusted-ca-bundle\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.554072 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfmj8\" (UniqueName: \"kubernetes.io/projected/5821d180-5114-4f2a-93f3-02922538bef6-kube-api-access-xfmj8\") pod \"5821d180-5114-4f2a-93f3-02922538bef6\" (UID: \"5821d180-5114-4f2a-93f3-02922538bef6\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.554101 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-audit-dir\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.554135 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-etcd-serving-ca\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.554164 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7xnb\" (UniqueName: \"kubernetes.io/projected/1d6c7cfb-2226-427a-99ee-d31f14aa975f-kube-api-access-r7xnb\") pod \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.554446 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.555582 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.556060 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.553827 master-0 kubenswrapper[7604]: I0309 16:26:10.556846 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.558636 master-0 kubenswrapper[7604]: I0309 16:26:10.557253 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-config" (OuterVolumeSpecName: "config") pod "5821d180-5114-4f2a-93f3-02922538bef6" (UID: "5821d180-5114-4f2a-93f3-02922538bef6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.558636 master-0 kubenswrapper[7604]: I0309 16:26:10.557919 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5821d180-5114-4f2a-93f3-02922538bef6" (UID: "5821d180-5114-4f2a-93f3-02922538bef6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.558636 master-0 kubenswrapper[7604]: I0309 16:26:10.558578 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.560083 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.561205 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5821d180-5114-4f2a-93f3-02922538bef6-kube-api-access-xfmj8" (OuterVolumeSpecName: "kube-api-access-xfmj8") pod "5821d180-5114-4f2a-93f3-02922538bef6" (UID: "5821d180-5114-4f2a-93f3-02922538bef6"). InnerVolumeSpecName "kube-api-access-xfmj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.561732 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-config\") pod \"3d071869-9372-4576-947f-520f9191abe3\" (UID: \"3d071869-9372-4576-947f-520f9191abe3\") " Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.561760 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-config\") pod \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\" (UID: \"1d6c7cfb-2226-427a-99ee-d31f14aa975f\") " Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562237 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562248 7604 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562258 7604 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562268 7604 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562277 7604 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562286 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfmj8\" (UniqueName: \"kubernetes.io/projected/5821d180-5114-4f2a-93f3-02922538bef6-kube-api-access-xfmj8\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562296 7604 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d071869-9372-4576-947f-520f9191abe3-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.562298 master-0 kubenswrapper[7604]: I0309 16:26:10.562305 7604 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.563117 master-0 kubenswrapper[7604]: I0309 16:26:10.563029 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:26:10.563251 master-0 kubenswrapper[7604]: I0309 16:26:10.563193 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-config" (OuterVolumeSpecName: "config") pod "1d6c7cfb-2226-427a-99ee-d31f14aa975f" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.563455 master-0 kubenswrapper[7604]: I0309 16:26:10.563416 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-config" (OuterVolumeSpecName: "config") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:10.574937 master-0 kubenswrapper[7604]: I0309 16:26:10.566834 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d071869-9372-4576-947f-520f9191abe3-kube-api-access-bkkcq" (OuterVolumeSpecName: "kube-api-access-bkkcq") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "kube-api-access-bkkcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:10.574937 master-0 kubenswrapper[7604]: I0309 16:26:10.569231 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "3d071869-9372-4576-947f-520f9191abe3" (UID: "3d071869-9372-4576-947f-520f9191abe3"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:26:10.575418 master-0 kubenswrapper[7604]: I0309 16:26:10.575324 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6c7cfb-2226-427a-99ee-d31f14aa975f-kube-api-access-r7xnb" (OuterVolumeSpecName: "kube-api-access-r7xnb") pod "1d6c7cfb-2226-427a-99ee-d31f14aa975f" (UID: "1d6c7cfb-2226-427a-99ee-d31f14aa975f"). InnerVolumeSpecName "kube-api-access-r7xnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:10.664614 master-0 kubenswrapper[7604]: I0309 16:26:10.664560 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7xnb\" (UniqueName: \"kubernetes.io/projected/1d6c7cfb-2226-427a-99ee-d31f14aa975f-kube-api-access-r7xnb\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.664876 master-0 kubenswrapper[7604]: I0309 16:26:10.664629 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.664876 master-0 kubenswrapper[7604]: I0309 16:26:10.664644 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.664876 master-0 kubenswrapper[7604]: I0309 16:26:10.664656 7604 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.664876 master-0 kubenswrapper[7604]: I0309 16:26:10.664669 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkkcq\" (UniqueName: \"kubernetes.io/projected/3d071869-9372-4576-947f-520f9191abe3-kube-api-access-bkkcq\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.664876 master-0 kubenswrapper[7604]: I0309 16:26:10.664724 7604 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:10.800916 master-0 kubenswrapper[7604]: I0309 16:26:10.799718 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 09 16:26:10.800916 master-0 kubenswrapper[7604]: I0309 
16:26:10.800305 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:10.803511 master-0 kubenswrapper[7604]: W0309 16:26:10.803398 7604 reflector.go:561] object-"openshift-kube-scheduler"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-scheduler": no relationship found between node 'master-0' and this object Mar 09 16:26:10.803511 master-0 kubenswrapper[7604]: E0309 16:26:10.803467 7604 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-scheduler\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 09 16:26:10.814988 master-0 kubenswrapper[7604]: I0309 16:26:10.814946 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-6r6g2"] Mar 09 16:26:10.829167 master-0 kubenswrapper[7604]: I0309 16:26:10.828929 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 09 16:26:10.968078 master-0 kubenswrapper[7604]: I0309 16:26:10.967606 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:10.968320 master-0 kubenswrapper[7604]: I0309 16:26:10.968093 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103a81df-6dfb-42d3-bc03-4391681c3e35-kube-api-access\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:10.968320 master-0 kubenswrapper[7604]: I0309 16:26:10.968223 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-var-lock\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:11.070105 master-0 kubenswrapper[7604]: I0309 16:26:11.069957 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-var-lock\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:11.070300 master-0 kubenswrapper[7604]: I0309 16:26:11.070158 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:11.070300 master-0 kubenswrapper[7604]: I0309 16:26:11.070210 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103a81df-6dfb-42d3-bc03-4391681c3e35-kube-api-access\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:11.070645 master-0 kubenswrapper[7604]: I0309 16:26:11.070532 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-var-lock\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:11.070645 master-0 kubenswrapper[7604]: I0309 16:26:11.070622 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:11.117862 master-0 kubenswrapper[7604]: I0309 16:26:11.117804 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"] Mar 09 16:26:11.118700 master-0 kubenswrapper[7604]: I0309 16:26:11.118674 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.133652 master-0 kubenswrapper[7604]: I0309 16:26:11.133592 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 09 16:26:11.133832 master-0 kubenswrapper[7604]: I0309 16:26:11.133617 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 09 16:26:11.134249 master-0 kubenswrapper[7604]: I0309 16:26:11.134124 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 09 16:26:11.134783 master-0 kubenswrapper[7604]: I0309 16:26:11.134757 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 09 16:26:11.135051 master-0 kubenswrapper[7604]: I0309 16:26:11.135018 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 09 16:26:11.155062 master-0 
kubenswrapper[7604]: I0309 16:26:11.154342 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 09 16:26:11.155062 master-0 kubenswrapper[7604]: I0309 16:26:11.154486 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 09 16:26:11.155062 master-0 kubenswrapper[7604]: I0309 16:26:11.154851 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 09 16:26:11.156904 master-0 kubenswrapper[7604]: I0309 16:26:11.156865 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"] Mar 09 16:26:11.273087 master-0 kubenswrapper[7604]: I0309 16:26:11.273022 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl5kt\" (UniqueName: \"kubernetes.io/projected/8c93fb5d-373d-4473-99dd-50e4398bafbf-kube-api-access-nl5kt\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273304 master-0 kubenswrapper[7604]: I0309 16:26:11.273095 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-policies\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273304 master-0 kubenswrapper[7604]: I0309 16:26:11.273184 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-client\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " 
pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273304 master-0 kubenswrapper[7604]: I0309 16:26:11.273209 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-trusted-ca-bundle\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273304 master-0 kubenswrapper[7604]: I0309 16:26:11.273269 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-encryption-config\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273304 master-0 kubenswrapper[7604]: I0309 16:26:11.273295 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-dir\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273548 master-0 kubenswrapper[7604]: I0309 16:26:11.273309 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.273548 master-0 kubenswrapper[7604]: I0309 16:26:11.273329 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-serving-ca\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.374196 master-0 kubenswrapper[7604]: I0309 16:26:11.374047 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-encryption-config\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.374196 master-0 kubenswrapper[7604]: I0309 16:26:11.374133 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:11.374547 master-0 kubenswrapper[7604]: I0309 16:26:11.374463 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-dir\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.374621 master-0 kubenswrapper[7604]: E0309 16:26:11.374568 7604 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 09 16:26:11.374664 master-0 kubenswrapper[7604]: E0309 16:26:11.374637 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs podName:4a2aa6f3-f049-423a-a8f5-5d33fc214a7b nodeName:}" failed. 
No retries permitted until 2026-03-09 16:26:13.374614554 +0000 UTC m=+30.428584157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-xrgml" (UID: "4a2aa6f3-f049-423a-a8f5-5d33fc214a7b") : secret "catalogserver-cert" not found Mar 09 16:26:11.374715 master-0 kubenswrapper[7604]: I0309 16:26:11.374664 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.374715 master-0 kubenswrapper[7604]: I0309 16:26:11.374697 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-serving-ca\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.374954 master-0 kubenswrapper[7604]: E0309 16:26:11.374901 7604 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 09 16:26:11.375034 master-0 kubenswrapper[7604]: E0309 16:26:11.375008 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert podName:8c93fb5d-373d-4473-99dd-50e4398bafbf nodeName:}" failed. No retries permitted until 2026-03-09 16:26:11.874974894 +0000 UTC m=+28.928944317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert") pod "apiserver-dc6bb954d-kxhv7" (UID: "8c93fb5d-373d-4473-99dd-50e4398bafbf") : secret "serving-cert" not found Mar 09 16:26:11.375203 master-0 kubenswrapper[7604]: I0309 16:26:11.375169 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl5kt\" (UniqueName: \"kubernetes.io/projected/8c93fb5d-373d-4473-99dd-50e4398bafbf-kube-api-access-nl5kt\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.375332 master-0 kubenswrapper[7604]: I0309 16:26:11.375298 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-policies\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.375849 master-0 kubenswrapper[7604]: I0309 16:26:11.375407 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-dir\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.376447 master-0 kubenswrapper[7604]: I0309 16:26:11.375496 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-client\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.377522 master-0 kubenswrapper[7604]: I0309 16:26:11.376251 7604 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-serving-ca\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.377522 master-0 kubenswrapper[7604]: I0309 16:26:11.376333 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-policies\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.377522 master-0 kubenswrapper[7604]: I0309 16:26:11.376573 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-trusted-ca-bundle\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.377522 master-0 kubenswrapper[7604]: I0309 16:26:11.376987 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-trusted-ca-bundle\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.386456 master-0 kubenswrapper[7604]: I0309 16:26:11.383550 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-encryption-config\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:11.386456 master-0 kubenswrapper[7604]: I0309 
16:26:11.383555 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-client\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:26:11.395853 master-0 kubenswrapper[7604]: I0309 16:26:11.395781 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl5kt\" (UniqueName: \"kubernetes.io/projected/8c93fb5d-373d-4473-99dd-50e4398bafbf-kube-api-access-nl5kt\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:26:11.411580 master-0 kubenswrapper[7604]: I0309 16:26:11.411444 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"07aaf428-5040-4e75-9c0d-e092d0b2c2f3","Type":"ContainerStarted","Data":"416bfbec5030b68d4b4837b781967c573c06ae0b5142f97eb8ad1a431a641798"}
Mar 09 16:26:11.411580 master-0 kubenswrapper[7604]: I0309 16:26:11.411531 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"07aaf428-5040-4e75-9c0d-e092d0b2c2f3","Type":"ContainerStarted","Data":"9901d0aaf4b1546909e7fc4c6fcee79bdbe51cd6dd0be1d8dfa8048b9232cb38"}
Mar 09 16:26:11.413329 master-0 kubenswrapper[7604]: I0309 16:26:11.413281 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" event={"ID":"af4aa8d4-09e1-4589-b7bf-885617a11337","Type":"ContainerStarted","Data":"f2698e39e3b5a035604353ee09cee0739a68806bc558360103357b0dbe104e2f"}
Mar 09 16:26:11.413415 master-0 kubenswrapper[7604]: I0309 16:26:11.413331 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" event={"ID":"af4aa8d4-09e1-4589-b7bf-885617a11337","Type":"ContainerStarted","Data":"d7bb1ade7135b46fd5c4d6dd8420520ed7e496d3520bdd197b24cd39361e4974"}
Mar 09 16:26:11.416487 master-0 kubenswrapper[7604]: I0309 16:26:11.416453 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"
Mar 09 16:26:11.416641 master-0 kubenswrapper[7604]: I0309 16:26:11.416531 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" event={"ID":"c72e89f0-37ad-4515-89ba-ba1f52ba61f0","Type":"ContainerStarted","Data":"a3018cc2d20ad1fcb82713903791fc74bac4951ff60c4b8d58740606d8bcbc26"}
Mar 09 16:26:11.416641 master-0 kubenswrapper[7604]: I0309 16:26:11.416577 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" event={"ID":"c72e89f0-37ad-4515-89ba-ba1f52ba61f0","Type":"ContainerStarted","Data":"eb0d4a5cd6b917ab3136d6670a91daed3539d6022e53b4e8f77735bc48ef873e"}
Mar 09 16:26:11.416746 master-0 kubenswrapper[7604]: I0309 16:26:11.416641 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5ddd5549bd-4wtqd"
Mar 09 16:26:11.417114 master-0 kubenswrapper[7604]: I0309 16:26:11.417082 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b7b5f969-flxtt"
Mar 09 16:26:11.541458 master-0 kubenswrapper[7604]: I0309 16:26:11.541167 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.541145116 podStartE2EDuration="2.541145116s" podCreationTimestamp="2026-03-09 16:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:11.540730034 +0000 UTC m=+28.594699467" watchObservedRunningTime="2026-03-09 16:26:11.541145116 +0000 UTC m=+28.595114539"
Mar 09 16:26:11.612980 master-0 kubenswrapper[7604]: I0309 16:26:11.612925 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79b7b5f969-flxtt"]
Mar 09 16:26:11.619397 master-0 kubenswrapper[7604]: I0309 16:26:11.619353 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79b49b464c-hl85g"]
Mar 09 16:26:11.620350 master-0 kubenswrapper[7604]: I0309 16:26:11.620327 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.621437 master-0 kubenswrapper[7604]: I0309 16:26:11.621315 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79b7b5f969-flxtt"]
Mar 09 16:26:11.625548 master-0 kubenswrapper[7604]: I0309 16:26:11.624973 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 16:26:11.625548 master-0 kubenswrapper[7604]: I0309 16:26:11.625284 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 16:26:11.625707 master-0 kubenswrapper[7604]: I0309 16:26:11.625668 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:26:11.625707 master-0 kubenswrapper[7604]: I0309 16:26:11.625679 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 09 16:26:11.626029 master-0 kubenswrapper[7604]: I0309 16:26:11.625963 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 09 16:26:11.637085 master-0 kubenswrapper[7604]: I0309 16:26:11.632694 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 09 16:26:11.652295 master-0 kubenswrapper[7604]: I0309 16:26:11.651414 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b49b464c-hl85g"]
Mar 09 16:26:11.652295 master-0 kubenswrapper[7604]: I0309 16:26:11.651724 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" podStartSLOduration=1.651705557 podStartE2EDuration="1.651705557s" podCreationTimestamp="2026-03-09 16:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:11.651278816 +0000 UTC m=+28.705248239" watchObservedRunningTime="2026-03-09 16:26:11.651705557 +0000 UTC m=+28.705674980"
Mar 09 16:26:11.655504 master-0 kubenswrapper[7604]: I0309 16:26:11.655362 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 09 16:26:11.664845 master-0 kubenswrapper[7604]: I0309 16:26:11.664221 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103a81df-6dfb-42d3-bc03-4391681c3e35-kube-api-access\") pod \"installer-1-master-0\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 09 16:26:11.683009 master-0 kubenswrapper[7604]: I0309 16:26:11.682332 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5ddd5549bd-4wtqd"]
Mar 09 16:26:11.693362 master-0 kubenswrapper[7604]: I0309 16:26:11.693324 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-5ddd5549bd-4wtqd"]
Mar 09 16:26:11.727101 master-0 kubenswrapper[7604]: I0309 16:26:11.727044 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"]
Mar 09 16:26:11.752332 master-0 kubenswrapper[7604]: I0309 16:26:11.752279 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 09 16:26:11.780657 master-0 kubenswrapper[7604]: I0309 16:26:11.780607 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65959ff4c9-fh2s4"]
Mar 09 16:26:11.783351 master-0 kubenswrapper[7604]: I0309 16:26:11.783303 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.783448 master-0 kubenswrapper[7604]: I0309 16:26:11.783383 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-config\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.783815 master-0 kubenswrapper[7604]: I0309 16:26:11.783523 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvhv\" (UniqueName: \"kubernetes.io/projected/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-kube-api-access-6rvhv\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.783815 master-0 kubenswrapper[7604]: I0309 16:26:11.783584 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-proxy-ca-bundles\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.783815 master-0 kubenswrapper[7604]: I0309 16:26:11.783776 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-client-ca\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.783972 master-0 kubenswrapper[7604]: I0309 16:26:11.783881 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d071869-9372-4576-947f-520f9191abe3-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:11.783972 master-0 kubenswrapper[7604]: I0309 16:26:11.783904 7604 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d071869-9372-4576-947f-520f9191abe3-audit\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:11.783972 master-0 kubenswrapper[7604]: I0309 16:26:11.783917 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5821d180-5114-4f2a-93f3-02922538bef6-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:11.783972 master-0 kubenswrapper[7604]: I0309 16:26:11.783930 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5821d180-5114-4f2a-93f3-02922538bef6-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:11.816386 master-0 kubenswrapper[7604]: I0309 16:26:11.816310 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" podStartSLOduration=2.816292795 podStartE2EDuration="2.816292795s" podCreationTimestamp="2026-03-09 16:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:11.815028199 +0000 UTC m=+28.868997642" watchObservedRunningTime="2026-03-09 16:26:11.816292795 +0000 UTC m=+28.870262218"
Mar 09 16:26:11.892101 master-0 kubenswrapper[7604]: I0309 16:26:11.891952 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.892101 master-0 kubenswrapper[7604]: I0309 16:26:11.892030 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-config\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.892101 master-0 kubenswrapper[7604]: I0309 16:26:11.892087 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:26:11.892509 master-0 kubenswrapper[7604]: I0309 16:26:11.892123 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvhv\" (UniqueName: \"kubernetes.io/projected/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-kube-api-access-6rvhv\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.892509 master-0 kubenswrapper[7604]: I0309 16:26:11.892155 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-proxy-ca-bundles\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.892509 master-0 kubenswrapper[7604]: I0309 16:26:11.892278 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-client-ca\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.892509 master-0 kubenswrapper[7604]: I0309 16:26:11.892472 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d6c7cfb-2226-427a-99ee-d31f14aa975f-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:11.892509 master-0 kubenswrapper[7604]: I0309 16:26:11.892492 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d6c7cfb-2226-427a-99ee-d31f14aa975f-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:11.893580 master-0 kubenswrapper[7604]: I0309 16:26:11.893548 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-client-ca\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.893712 master-0 kubenswrapper[7604]: E0309 16:26:11.893673 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:26:11.893764 master-0 kubenswrapper[7604]: E0309 16:26:11.893745 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert podName:44c4c1a8-aa94-44b7-9f21-3a55a59dcb62 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:12.393724797 +0000 UTC m=+29.447694220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert") pod "controller-manager-79b49b464c-hl85g" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62") : secret "serving-cert" not found
Mar 09 16:26:11.894856 master-0 kubenswrapper[7604]: I0309 16:26:11.894822 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-config\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.894954 master-0 kubenswrapper[7604]: E0309 16:26:11.894932 7604 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 09 16:26:11.894994 master-0 kubenswrapper[7604]: E0309 16:26:11.894982 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert podName:8c93fb5d-373d-4473-99dd-50e4398bafbf nodeName:}" failed. No retries permitted until 2026-03-09 16:26:12.894969272 +0000 UTC m=+29.948938695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert") pod "apiserver-dc6bb954d-kxhv7" (UID: "8c93fb5d-373d-4473-99dd-50e4398bafbf") : secret "serving-cert" not found
Mar 09 16:26:11.906359 master-0 kubenswrapper[7604]: I0309 16:26:11.904347 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-proxy-ca-bundles\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:11.913670 master-0 kubenswrapper[7604]: I0309 16:26:11.912877 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvhv\" (UniqueName: \"kubernetes.io/projected/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-kube-api-access-6rvhv\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:12.030722 master-0 kubenswrapper[7604]: I0309 16:26:12.030647 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 09 16:26:12.036389 master-0 kubenswrapper[7604]: W0309 16:26:12.036142 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod103a81df_6dfb_42d3_bc03_4391681c3e35.slice/crio-0e8cad4e52fb5c35bce0a53f2e1987cc8c806e677f3567f9d359feddc29333f6 WatchSource:0}: Error finding container 0e8cad4e52fb5c35bce0a53f2e1987cc8c806e677f3567f9d359feddc29333f6: Status 404 returned error can't find the container with id 0e8cad4e52fb5c35bce0a53f2e1987cc8c806e677f3567f9d359feddc29333f6
Mar 09 16:26:12.400451 master-0 kubenswrapper[7604]: I0309 16:26:12.400356 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:12.400722 master-0 kubenswrapper[7604]: E0309 16:26:12.400655 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:26:12.400852 master-0 kubenswrapper[7604]: E0309 16:26:12.400815 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert podName:44c4c1a8-aa94-44b7-9f21-3a55a59dcb62 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:13.400782331 +0000 UTC m=+30.454751754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert") pod "controller-manager-79b49b464c-hl85g" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62") : secret "serving-cert" not found
Mar 09 16:26:12.422649 master-0 kubenswrapper[7604]: I0309 16:26:12.422581 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"103a81df-6dfb-42d3-bc03-4391681c3e35","Type":"ContainerStarted","Data":"a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6"}
Mar 09 16:26:12.422649 master-0 kubenswrapper[7604]: I0309 16:26:12.422647 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"103a81df-6dfb-42d3-bc03-4391681c3e35","Type":"ContainerStarted","Data":"0e8cad4e52fb5c35bce0a53f2e1987cc8c806e677f3567f9d359feddc29333f6"}
Mar 09 16:26:12.422965 master-0 kubenswrapper[7604]: I0309 16:26:12.422839 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"
Mar 09 16:26:12.468895 master-0 kubenswrapper[7604]: I0309 16:26:12.468830 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.468814231 podStartE2EDuration="2.468814231s" podCreationTimestamp="2026-03-09 16:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:12.467455642 +0000 UTC m=+29.521425075" watchObservedRunningTime="2026-03-09 16:26:12.468814231 +0000 UTC m=+29.522783654"
Mar 09 16:26:12.912996 master-0 kubenswrapper[7604]: I0309 16:26:12.912854 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:26:12.913816 master-0 kubenswrapper[7604]: E0309 16:26:12.913122 7604 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 09 16:26:12.913816 master-0 kubenswrapper[7604]: E0309 16:26:12.913242 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert podName:8c93fb5d-373d-4473-99dd-50e4398bafbf nodeName:}" failed. No retries permitted until 2026-03-09 16:26:14.913215127 +0000 UTC m=+31.967184700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert") pod "apiserver-dc6bb954d-kxhv7" (UID: "8c93fb5d-373d-4473-99dd-50e4398bafbf") : secret "serving-cert" not found
Mar 09 16:26:13.118017 master-0 kubenswrapper[7604]: I0309 16:26:13.117539 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6c7cfb-2226-427a-99ee-d31f14aa975f" path="/var/lib/kubelet/pods/1d6c7cfb-2226-427a-99ee-d31f14aa975f/volumes"
Mar 09 16:26:13.118517 master-0 kubenswrapper[7604]: I0309 16:26:13.118484 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d071869-9372-4576-947f-520f9191abe3" path="/var/lib/kubelet/pods/3d071869-9372-4576-947f-520f9191abe3/volumes"
Mar 09 16:26:13.118910 master-0 kubenswrapper[7604]: I0309 16:26:13.118881 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5821d180-5114-4f2a-93f3-02922538bef6" path="/var/lib/kubelet/pods/5821d180-5114-4f2a-93f3-02922538bef6/volumes"
Mar 09 16:26:13.422370 master-0 kubenswrapper[7604]: I0309 16:26:13.422301 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:13.422616 master-0 kubenswrapper[7604]: I0309 16:26:13.422389 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:26:13.422871 master-0 kubenswrapper[7604]: E0309 16:26:13.422836 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:26:13.423062 master-0 kubenswrapper[7604]: E0309 16:26:13.423049 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert podName:44c4c1a8-aa94-44b7-9f21-3a55a59dcb62 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.423002789 +0000 UTC m=+32.476972212 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert") pod "controller-manager-79b49b464c-hl85g" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62") : secret "serving-cert" not found
Mar 09 16:26:13.434899 master-0 kubenswrapper[7604]: I0309 16:26:13.434084 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:26:13.450457 master-0 kubenswrapper[7604]: I0309 16:26:13.449872 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"
Mar 09 16:26:13.607543 master-0 kubenswrapper[7604]: I0309 16:26:13.605830 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:26:13.776474 master-0 kubenswrapper[7604]: I0309 16:26:13.773356 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-67495f79c-bcblv"]
Mar 09 16:26:13.776474 master-0 kubenswrapper[7604]: I0309 16:26:13.774864 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.776474 master-0 kubenswrapper[7604]: I0309 16:26:13.775199 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"]
Mar 09 16:26:13.776474 master-0 kubenswrapper[7604]: I0309 16:26:13.776174 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:13.781463 master-0 kubenswrapper[7604]: I0309 16:26:13.781289 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 09 16:26:13.781624 master-0 kubenswrapper[7604]: I0309 16:26:13.781603 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 09 16:26:13.784467 master-0 kubenswrapper[7604]: I0309 16:26:13.781750 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 09 16:26:13.784467 master-0 kubenswrapper[7604]: I0309 16:26:13.781874 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 09 16:26:13.784467 master-0 kubenswrapper[7604]: I0309 16:26:13.781983 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 09 16:26:13.784467 master-0 kubenswrapper[7604]: I0309 16:26:13.782080 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 09 16:26:13.791456 master-0 kubenswrapper[7604]: I0309 16:26:13.790001 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67495f79c-bcblv"]
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.792124 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.792185 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.792270 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.792792 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"]
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.792967 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.793058 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.793628 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.793953 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.794078 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 09 16:26:13.797453 master-0 kubenswrapper[7604]: I0309 16:26:13.796696 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830536 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5mf6\" (UniqueName: \"kubernetes.io/projected/442202b9-edf6-4d40-85e9-348b7bbe56e3-kube-api-access-h5mf6\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830623 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830655 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-trusted-ca-bundle\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830675 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit-dir\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830712 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-client\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830729 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-client-ca\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830776 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-image-import-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830802 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830822 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-encryption-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830842 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-config\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830868 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830904 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-node-pullsecrets\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830936 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-serving-cert\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.830964 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc2t2\" (UniqueName: \"kubernetes.io/projected/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-kube-api-access-kc2t2\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.831461 master-0 kubenswrapper[7604]: I0309 16:26:13.831006 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-serving-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.932533 master-0 kubenswrapper[7604]: I0309 16:26:13.932126 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-image-import-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.932533 master-0 kubenswrapper[7604]: I0309 16:26:13.932192 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.932533 master-0 kubenswrapper[7604]: I0309 16:26:13.932214 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-encryption-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.932533 master-0 kubenswrapper[7604]: I0309 16:26:13.932232 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-config\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:13.932533 master-0 kubenswrapper[7604]: I0309 16:26:13.932250 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.932739 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-node-pullsecrets\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.932843 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-node-pullsecrets\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.932983 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-serving-cert\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.933015 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2t2\" (UniqueName: \"kubernetes.io/projected/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-kube-api-access-kc2t2\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09
16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.933270 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-serving-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.933360 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5mf6\" (UniqueName: \"kubernetes.io/projected/442202b9-edf6-4d40-85e9-348b7bbe56e3-kube-api-access-h5mf6\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.933456 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.933468 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-config\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:13.933790 master-0 kubenswrapper[7604]: I0309 16:26:13.933501 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-trusted-ca-bundle\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.934097 master-0 kubenswrapper[7604]: I0309 16:26:13.933867 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit-dir\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.934097 master-0 kubenswrapper[7604]: E0309 16:26:13.933936 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:13.934097 master-0 kubenswrapper[7604]: I0309 16:26:13.933966 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-client\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.934097 master-0 kubenswrapper[7604]: E0309 16:26:13.934016 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert podName:442202b9-edf6-4d40-85e9-348b7bbe56e3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:14.433980044 +0000 UTC m=+31.487949467 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert") pod "route-controller-manager-7fbb6944d8-sbv7k" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3") : secret "serving-cert" not found Mar 09 16:26:13.934097 master-0 kubenswrapper[7604]: I0309 16:26:13.934038 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-client-ca\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:13.936287 master-0 kubenswrapper[7604]: I0309 16:26:13.934485 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.936287 master-0 kubenswrapper[7604]: I0309 16:26:13.934555 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit-dir\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.936287 master-0 kubenswrapper[7604]: I0309 16:26:13.935025 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-client-ca\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:13.941509 master-0 kubenswrapper[7604]: I0309 16:26:13.938935 7604 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-encryption-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.941509 master-0 kubenswrapper[7604]: I0309 16:26:13.939394 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.941509 master-0 kubenswrapper[7604]: I0309 16:26:13.939694 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-image-import-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.941509 master-0 kubenswrapper[7604]: I0309 16:26:13.940793 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-serving-cert\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.941509 master-0 kubenswrapper[7604]: I0309 16:26:13.940867 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-client\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.941509 master-0 kubenswrapper[7604]: I0309 16:26:13.940798 7604 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-serving-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.942340 master-0 kubenswrapper[7604]: I0309 16:26:13.942296 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-trusted-ca-bundle\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.954468 master-0 kubenswrapper[7604]: I0309 16:26:13.954349 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"] Mar 09 16:26:13.954743 master-0 kubenswrapper[7604]: I0309 16:26:13.954707 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc2t2\" (UniqueName: \"kubernetes.io/projected/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-kube-api-access-kc2t2\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:13.959357 master-0 kubenswrapper[7604]: I0309 16:26:13.959297 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5mf6\" (UniqueName: \"kubernetes.io/projected/442202b9-edf6-4d40-85e9-348b7bbe56e3-kube-api-access-h5mf6\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:14.127496 master-0 kubenswrapper[7604]: I0309 16:26:14.127406 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:26:14.320537 master-0 kubenswrapper[7604]: I0309 16:26:14.319924 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-67495f79c-bcblv"] Mar 09 16:26:14.347106 master-0 kubenswrapper[7604]: W0309 16:26:14.347043 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217473c4_ef8f_4f4f_bce9_e92d5cc1e5b8.slice/crio-b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e WatchSource:0}: Error finding container b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e: Status 404 returned error can't find the container with id b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e Mar 09 16:26:14.443113 master-0 kubenswrapper[7604]: I0309 16:26:14.443058 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:14.443471 master-0 kubenswrapper[7604]: E0309 16:26:14.443304 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:14.443508 master-0 kubenswrapper[7604]: E0309 16:26:14.443481 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert podName:442202b9-edf6-4d40-85e9-348b7bbe56e3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:15.443462827 +0000 UTC m=+32.497432250 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert") pod "route-controller-manager-7fbb6944d8-sbv7k" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3") : secret "serving-cert" not found Mar 09 16:26:14.447037 master-0 kubenswrapper[7604]: I0309 16:26:14.446741 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67495f79c-bcblv" event={"ID":"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8","Type":"ContainerStarted","Data":"b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e"} Mar 09 16:26:14.453843 master-0 kubenswrapper[7604]: I0309 16:26:14.453781 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" event={"ID":"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b","Type":"ContainerStarted","Data":"4fc5ebe625ed54c3d67f7a4689964a54c61c83f3612ec773524ffd6c73856293"} Mar 09 16:26:14.453979 master-0 kubenswrapper[7604]: I0309 16:26:14.453850 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" event={"ID":"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b","Type":"ContainerStarted","Data":"0db2ff86863610cd35f22607d00ebf639a464870bd45fa4d2fdcd1d0d766b907"} Mar 09 16:26:14.453979 master-0 kubenswrapper[7604]: I0309 16:26:14.453868 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" event={"ID":"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b","Type":"ContainerStarted","Data":"1d2a3afb8eb1e0a8c25b36f8e7877fb572cd427c87f5ea499b36180c2a18273c"} Mar 09 16:26:14.454252 master-0 kubenswrapper[7604]: I0309 16:26:14.454219 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:26:14.570595 master-0 kubenswrapper[7604]: I0309 16:26:14.570268 7604 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" podStartSLOduration=5.570240994 podStartE2EDuration="5.570240994s" podCreationTimestamp="2026-03-09 16:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:14.568448473 +0000 UTC m=+31.622417906" watchObservedRunningTime="2026-03-09 16:26:14.570240994 +0000 UTC m=+31.624210417" Mar 09 16:26:14.948234 master-0 kubenswrapper[7604]: I0309 16:26:14.948164 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:14.949093 master-0 kubenswrapper[7604]: E0309 16:26:14.948362 7604 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 09 16:26:14.949093 master-0 kubenswrapper[7604]: E0309 16:26:14.948483 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert podName:8c93fb5d-373d-4473-99dd-50e4398bafbf nodeName:}" failed. No retries permitted until 2026-03-09 16:26:18.948458214 +0000 UTC m=+36.002427637 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert") pod "apiserver-dc6bb954d-kxhv7" (UID: "8c93fb5d-373d-4473-99dd-50e4398bafbf") : secret "serving-cert" not found Mar 09 16:26:15.456746 master-0 kubenswrapper[7604]: I0309 16:26:15.456128 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:15.456746 master-0 kubenswrapper[7604]: E0309 16:26:15.456269 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:15.456746 master-0 kubenswrapper[7604]: E0309 16:26:15.456336 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert podName:442202b9-edf6-4d40-85e9-348b7bbe56e3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:17.456316531 +0000 UTC m=+34.510286004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert") pod "route-controller-manager-7fbb6944d8-sbv7k" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3") : secret "serving-cert" not found Mar 09 16:26:15.456746 master-0 kubenswrapper[7604]: I0309 16:26:15.456562 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" Mar 09 16:26:15.457204 master-0 kubenswrapper[7604]: E0309 16:26:15.456775 7604 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 09 16:26:15.457204 master-0 kubenswrapper[7604]: E0309 16:26:15.456881 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert podName:44c4c1a8-aa94-44b7-9f21-3a55a59dcb62 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:19.456856496 +0000 UTC m=+36.510825979 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert") pod "controller-manager-79b49b464c-hl85g" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62") : secret "serving-cert" not found Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: I0309 16:26:15.861447 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: E0309 16:26:15.861732 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: E0309 16:26:15.861822 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert podName:d15da434-241d-4a93-9ce3-f943d43bf2ce nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.861800297 +0000 UTC m=+64.915769780 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert") pod "catalog-operator-7d9c49f57b-hv8xl" (UID: "d15da434-241d-4a93-9ce3-f943d43bf2ce") : secret "catalog-operator-serving-cert" not found Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: I0309 16:26:15.862018 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: I0309 16:26:15.862114 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: I0309 16:26:15.862172 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: I0309 16:26:15.862195 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") 
" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: I0309 16:26:15.862222 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: E0309 16:26:15.862268 7604 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: E0309 16:26:15.862470 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls podName:f606b775-bf22-4d64-abb4-8e0e24ddb5cd nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.862447545 +0000 UTC m=+64.916416968 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls") pod "ingress-operator-677db989d6-xtmhw" (UID: "f606b775-bf22-4d64-abb4-8e0e24ddb5cd") : secret "metrics-tls" not found Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: E0309 16:26:15.862629 7604 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 09 16:26:15.864568 master-0 kubenswrapper[7604]: E0309 16:26:15.862699 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics podName:5b9030c9-7f5f-4e54-ae93-140469e3558b nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.86265325 +0000 UTC m=+64.916622673 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-vh6m4" (UID: "5b9030c9-7f5f-4e54-ae93-140469e3558b") : secret "marketplace-operator-metrics" not found Mar 09 16:26:15.868997 master-0 kubenswrapper[7604]: I0309 16:26:15.868725 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"cluster-version-operator-745944c6b7-pwnsk\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:26:15.870483 master-0 kubenswrapper[7604]: I0309 16:26:15.870413 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:26:15.873153 master-0 kubenswrapper[7604]: I0309 16:26:15.873083 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:26:15.877362 master-0 kubenswrapper[7604]: I0309 16:26:15.877287 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:26:15.889234 master-0 kubenswrapper[7604]: W0309 16:26:15.889178 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77a20946_c236_417e_8333_6d1aac88bbc2.slice/crio-8a9a6d38115c98f2a33ab555eccd9b2fe5937b945bc7ebcd3ee8e747b92f50a4 WatchSource:0}: Error finding container 8a9a6d38115c98f2a33ab555eccd9b2fe5937b945bc7ebcd3ee8e747b92f50a4: Status 404 returned error can't find the container with id 8a9a6d38115c98f2a33ab555eccd9b2fe5937b945bc7ebcd3ee8e747b92f50a4 Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.963924 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.963988 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" 
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964020 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964136 7604 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964197 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls podName:72739f4d-da25-493b-91ef-d2b64e9297dd nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.964178478 +0000 UTC m=+65.018147901 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls") pod "dns-operator-589895fbb7-6sknh" (UID: "72739f4d-da25-493b-91ef-d2b64e9297dd") : secret "metrics-tls" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964220 7604 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964304 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs podName:ef122f26-bfae-44d2-a70a-8507b3b47332 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.964284882 +0000 UTC m=+65.018254305 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs") pod "network-metrics-daemon-n7slb" (UID: "ef122f26-bfae-44d2-a70a-8507b3b47332") : secret "metrics-daemon-secret" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964442 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964559 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964581 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964599 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert podName:be86c85d-59b1-4279-8253-a998ca16cd4d nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.96458842 +0000 UTC m=+65.018558013 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert") pod "olm-operator-d64cfc9db-qtmrd" (UID: "be86c85d-59b1-4279-8253-a998ca16cd4d") : secret "olm-operator-serving-cert" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964634 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964693 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964715 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964728 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964759 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.964743255 +0000 UTC m=+65.018712688 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: I0309 16:26:15.964784 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964863 7604 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964906 7604 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964944 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert podName:f965b971-7e9a-4513-8450-b2b527609bd6 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.96492679 +0000 UTC m=+65.018896373 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-fqwtv" (UID: "f965b971-7e9a-4513-8450-b2b527609bd6") : secret "package-server-manager-serving-cert" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964963 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls podName:004d1e93-2345-4e62-902c-33f9dbb0f397 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.964953861 +0000 UTC m=+65.018923504 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-8lvt9" (UID: "004d1e93-2345-4e62-902c-33f9dbb0f397") : secret "cluster-monitoring-operator-tls" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.964987 7604 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 09 16:26:15.964993 master-0 kubenswrapper[7604]: E0309 16:26:15.965018 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs podName:4bd3c489-427c-4a47-b7b9-5d1611b9be12 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.965008682 +0000 UTC m=+65.018978285 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs") pod "multus-admission-controller-8d675b596-g8n5t" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12") : secret "multus-admission-controller-secret" not found
Mar 09 16:26:15.966927 master-0 kubenswrapper[7604]: E0309 16:26:15.965049 7604 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 09 16:26:15.966927 master-0 kubenswrapper[7604]: E0309 16:26:15.965075 7604 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 09 16:26:15.966927 master-0 kubenswrapper[7604]: E0309 16:26:15.965115 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls podName:2e765395-7c6b-4cba-9a5a-37ba888722bb nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.965106975 +0000 UTC m=+65.019076398 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-dd2j5" (UID: "2e765395-7c6b-4cba-9a5a-37ba888722bb") : secret "image-registry-operator-tls" not found
Mar 09 16:26:15.966927 master-0 kubenswrapper[7604]: E0309 16:26:15.965206 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls podName:fa7f88a3-9845-49a3-a108-d524df592961 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:47.965162246 +0000 UTC m=+65.019131859 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-p27tf" (UID: "fa7f88a3-9845-49a3-a108-d524df592961") : secret "cluster-baremetal-operator-tls" not found
Mar 09 16:26:16.171280 master-0 kubenswrapper[7604]: I0309 16:26:16.170121 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:26:16.355390 master-0 kubenswrapper[7604]: I0309 16:26:16.355299 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"]
Mar 09 16:26:16.367254 master-0 kubenswrapper[7604]: W0309 16:26:16.367138 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc732d23_37bc_41c2_9f9b_333ba517c1f8.slice/crio-90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d WatchSource:0}: Error finding container 90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d: Status 404 returned error can't find the container with id 90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d
Mar 09 16:26:16.468999 master-0 kubenswrapper[7604]: I0309 16:26:16.468461 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" event={"ID":"77a20946-c236-417e-8333-6d1aac88bbc2","Type":"ContainerStarted","Data":"8a9a6d38115c98f2a33ab555eccd9b2fe5937b945bc7ebcd3ee8e747b92f50a4"}
Mar 09 16:26:16.470916 master-0 kubenswrapper[7604]: I0309 16:26:16.470874 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" event={"ID":"dc732d23-37bc-41c2-9f9b-333ba517c1f8","Type":"ContainerStarted","Data":"90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d"}
Mar 09 16:26:16.539514 master-0 kubenswrapper[7604]: I0309 16:26:16.535213 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 09 16:26:16.539514 master-0 kubenswrapper[7604]: I0309 16:26:16.535443 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="103a81df-6dfb-42d3-bc03-4391681c3e35" containerName="installer" containerID="cri-o://a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6" gracePeriod=30
Mar 09 16:26:17.487884 master-0 kubenswrapper[7604]: I0309 16:26:17.487810 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:17.489402 master-0 kubenswrapper[7604]: E0309 16:26:17.488029 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:26:17.489690 master-0 kubenswrapper[7604]: E0309 16:26:17.489471 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert podName:442202b9-edf6-4d40-85e9-348b7bbe56e3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:21.489444298 +0000 UTC m=+38.543413901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert") pod "route-controller-manager-7fbb6944d8-sbv7k" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3") : secret "serving-cert" not found
Mar 09 16:26:18.474270 master-0 kubenswrapper[7604]: I0309 16:26:18.474172 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-ncskk"
Mar 09 16:26:18.945729 master-0 kubenswrapper[7604]: I0309 16:26:18.945350 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 09 16:26:18.946463 master-0 kubenswrapper[7604]: I0309 16:26:18.946406 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 09 16:26:18.946519 master-0 kubenswrapper[7604]: I0309 16:26:18.946475 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.008933 master-0 kubenswrapper[7604]: I0309 16:26:19.008878 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:26:19.009137 master-0 kubenswrapper[7604]: I0309 16:26:19.008962 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-var-lock\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.009137 master-0 kubenswrapper[7604]: I0309 16:26:19.009018 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/818888c7-e9f0-4818-930b-94f55bbc66ca-kube-api-access\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.009137 master-0 kubenswrapper[7604]: I0309 16:26:19.009036 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.009267 master-0 kubenswrapper[7604]: E0309 16:26:19.009177 7604 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 09 16:26:19.009267 master-0 kubenswrapper[7604]: E0309 16:26:19.009219 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert podName:8c93fb5d-373d-4473-99dd-50e4398bafbf nodeName:}" failed. No retries permitted until 2026-03-09 16:26:27.009204862 +0000 UTC m=+44.063174285 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert") pod "apiserver-dc6bb954d-kxhv7" (UID: "8c93fb5d-373d-4473-99dd-50e4398bafbf") : secret "serving-cert" not found
Mar 09 16:26:19.110445 master-0 kubenswrapper[7604]: I0309 16:26:19.110327 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-var-lock\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.110879 master-0 kubenswrapper[7604]: I0309 16:26:19.110758 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-var-lock\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.110941 master-0 kubenswrapper[7604]: I0309 16:26:19.110869 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/818888c7-e9f0-4818-930b-94f55bbc66ca-kube-api-access\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.111019 master-0 kubenswrapper[7604]: I0309 16:26:19.110987 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.113190 master-0 kubenswrapper[7604]: I0309 16:26:19.111195 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.144536 master-0 kubenswrapper[7604]: I0309 16:26:19.144459 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/818888c7-e9f0-4818-930b-94f55bbc66ca-kube-api-access\") pod \"installer-2-master-0\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.272371 master-0 kubenswrapper[7604]: I0309 16:26:19.272321 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 09 16:26:19.485618 master-0 kubenswrapper[7604]: I0309 16:26:19.485178 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" event={"ID":"77a20946-c236-417e-8333-6d1aac88bbc2","Type":"ContainerStarted","Data":"9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895"}
Mar 09 16:26:19.516833 master-0 kubenswrapper[7604]: I0309 16:26:19.516785 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:19.525413 master-0 kubenswrapper[7604]: I0309 16:26:19.525319 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"controller-manager-79b49b464c-hl85g\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") " pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:19.596032 master-0 kubenswrapper[7604]: I0309 16:26:19.595965 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 09 16:26:19.603003 master-0 kubenswrapper[7604]: W0309 16:26:19.602949 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod818888c7_e9f0_4818_930b_94f55bbc66ca.slice/crio-5d52fff78d21eb99fb89b3481244aa8759ed14a038b5d668684fa2f6dddcc4e2 WatchSource:0}: Error finding container 5d52fff78d21eb99fb89b3481244aa8759ed14a038b5d668684fa2f6dddcc4e2: Status 404 returned error can't find the container with id 5d52fff78d21eb99fb89b3481244aa8759ed14a038b5d668684fa2f6dddcc4e2
Mar 09 16:26:19.738929 master-0 kubenswrapper[7604]: I0309 16:26:19.738826 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:19.889902 master-0 kubenswrapper[7604]: I0309 16:26:19.889859 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"
Mar 09 16:26:20.270509 master-0 kubenswrapper[7604]: I0309 16:26:20.270400 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79b49b464c-hl85g"]
Mar 09 16:26:20.278209 master-0 kubenswrapper[7604]: W0309 16:26:20.278160 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44c4c1a8_aa94_44b7_9f21_3a55a59dcb62.slice/crio-091d88a0bd722f6065bc9407b96946956019ef156fb465a31f7684b95746ba3c WatchSource:0}: Error finding container 091d88a0bd722f6065bc9407b96946956019ef156fb465a31f7684b95746ba3c: Status 404 returned error can't find the container with id 091d88a0bd722f6065bc9407b96946956019ef156fb465a31f7684b95746ba3c
Mar 09 16:26:20.489755 master-0 kubenswrapper[7604]: I0309 16:26:20.489662 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"818888c7-e9f0-4818-930b-94f55bbc66ca","Type":"ContainerStarted","Data":"379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423"}
Mar 09 16:26:20.489755 master-0 kubenswrapper[7604]: I0309 16:26:20.489703 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"818888c7-e9f0-4818-930b-94f55bbc66ca","Type":"ContainerStarted","Data":"5d52fff78d21eb99fb89b3481244aa8759ed14a038b5d668684fa2f6dddcc4e2"}
Mar 09 16:26:20.493197 master-0 kubenswrapper[7604]: I0309 16:26:20.493139 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" event={"ID":"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62","Type":"ContainerStarted","Data":"091d88a0bd722f6065bc9407b96946956019ef156fb465a31f7684b95746ba3c"}
Mar 09 16:26:20.495493 master-0 kubenswrapper[7604]: I0309 16:26:20.494550 7604 generic.go:334] "Generic (PLEG): container finished" podID="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" containerID="d307ffcff5003265477b28e1b3fffae55393c2ce9ccdbb4d1fcf4602c47a75a3" exitCode=0
Mar 09 16:26:20.495493 master-0 kubenswrapper[7604]: I0309 16:26:20.495448 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67495f79c-bcblv" event={"ID":"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8","Type":"ContainerDied","Data":"d307ffcff5003265477b28e1b3fffae55393c2ce9ccdbb4d1fcf4602c47a75a3"}
Mar 09 16:26:20.825265 master-0 kubenswrapper[7604]: I0309 16:26:20.824816 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.824794457 podStartE2EDuration="2.824794457s" podCreationTimestamp="2026-03-09 16:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:20.823926022 +0000 UTC m=+37.877895445" watchObservedRunningTime="2026-03-09 16:26:20.824794457 +0000 UTC m=+37.878763880"
Mar 09 16:26:21.502826 master-0 kubenswrapper[7604]: I0309 16:26:21.500955 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67495f79c-bcblv" event={"ID":"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8","Type":"ContainerStarted","Data":"2827d85bf419cc457c1b3dfe281b1b768d01e1eaab30eca76c9b1fa905c55ea1"}
Mar 09 16:26:21.546496 master-0 kubenswrapper[7604]: I0309 16:26:21.546253 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:21.546496 master-0 kubenswrapper[7604]: E0309 16:26:21.546483 7604 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 09 16:26:21.546758 master-0 kubenswrapper[7604]: E0309 16:26:21.546540 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert podName:442202b9-edf6-4d40-85e9-348b7bbe56e3 nodeName:}" failed. No retries permitted until 2026-03-09 16:26:29.546521594 +0000 UTC m=+46.600491017 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert") pod "route-controller-manager-7fbb6944d8-sbv7k" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3") : secret "serving-cert" not found
Mar 09 16:26:23.513698 master-0 kubenswrapper[7604]: I0309 16:26:23.513416 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-67495f79c-bcblv" event={"ID":"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8","Type":"ContainerStarted","Data":"fa05c52547fb17ec790ea7e043b3ab8c4a235ae046904b474558e22c8ed4e332"}
Mar 09 16:26:23.514668 master-0 kubenswrapper[7604]: I0309 16:26:23.514635 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" event={"ID":"dc732d23-37bc-41c2-9f9b-333ba517c1f8","Type":"ContainerStarted","Data":"25a7ab145b0763001053c074ce2286add5df023f3e9455ff678697bf2aec9346"}
Mar 09 16:26:23.625457 master-0 kubenswrapper[7604]: I0309 16:26:23.623159 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:26:23.792590 master-0 kubenswrapper[7604]: I0309 16:26:23.792412 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-fllqb"]
Mar 09 16:26:23.793156 master-0 kubenswrapper[7604]: I0309 16:26:23.793124 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.885798 master-0 kubenswrapper[7604]: I0309 16:26:23.885687 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.885798 master-0 kubenswrapper[7604]: I0309 16:26:23.885732 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-lib-modules\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.885798 master-0 kubenswrapper[7604]: I0309 16:26:23.885757 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-conf\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.885798 master-0 kubenswrapper[7604]: I0309 16:26:23.885795 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-sys\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.885841 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-modprobe-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.885878 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysconfig\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.885906 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-systemd\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.885969 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-run\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.885988 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-host\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.886028 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mr7t\" (UniqueName: \"kubernetes.io/projected/c76178f6-3f0b-4b7d-ad23-724b83e35120-kube-api-access-2mr7t\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.886056 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-kubernetes\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886083 master-0 kubenswrapper[7604]: I0309 16:26:23.886090 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-var-lib-kubelet\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886302 master-0 kubenswrapper[7604]: I0309 16:26:23.886114 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-tmp\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.886302 master-0 kubenswrapper[7604]: I0309 16:26:23.886153 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-tuned\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.987965 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-conf\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988032 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-sys\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988083 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-modprobe-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988120 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysconfig\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988150 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-systemd\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988210 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-run\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988242 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-host\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988264 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mr7t\" (UniqueName: \"kubernetes.io/projected/c76178f6-3f0b-4b7d-ad23-724b83e35120-kube-api-access-2mr7t\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988318 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-kubernetes\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988351 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-var-lib-kubelet\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988374 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-tmp\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988437 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-tuned\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988460 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.988813 master-0 kubenswrapper[7604]: I0309 16:26:23.988479 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-lib-modules\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.989662 master-0 kubenswrapper[7604]: I0309 16:26:23.989633 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-lib-modules\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:26:23.989865 master-0 kubenswrapper[7604]: I0309 16:26:23.989790 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName:
\"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-conf\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.989966 master-0 kubenswrapper[7604]: I0309 16:26:23.989946 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-sys\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.990044 master-0 kubenswrapper[7604]: I0309 16:26:23.990027 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-modprobe-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.990601 master-0 kubenswrapper[7604]: I0309 16:26:23.990550 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysconfig\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.990675 master-0 kubenswrapper[7604]: I0309 16:26:23.990606 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-systemd\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.990675 master-0 kubenswrapper[7604]: I0309 16:26:23.990646 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-run\") pod \"tuned-fllqb\" (UID: 
\"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.990767 master-0 kubenswrapper[7604]: I0309 16:26:23.990679 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-host\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.992275 master-0 kubenswrapper[7604]: I0309 16:26:23.991245 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-kubernetes\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.992275 master-0 kubenswrapper[7604]: I0309 16:26:23.991348 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-var-lib-kubelet\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.992446 master-0 kubenswrapper[7604]: I0309 16:26:23.992306 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:23.998442 master-0 kubenswrapper[7604]: I0309 16:26:23.996913 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-tmp\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" 
Mar 09 16:26:23.998442 master-0 kubenswrapper[7604]: I0309 16:26:23.997086 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-tuned\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:24.011203 master-0 kubenswrapper[7604]: I0309 16:26:24.011171 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mr7t\" (UniqueName: \"kubernetes.io/projected/c76178f6-3f0b-4b7d-ad23-724b83e35120-kube-api-access-2mr7t\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:24.116953 master-0 kubenswrapper[7604]: I0309 16:26:24.115809 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:26:24.133515 master-0 kubenswrapper[7604]: W0309 16:26:24.133477 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76178f6_3f0b_4b7d_ad23_724b83e35120.slice/crio-4fd58fbc52e301ba91b715c959436d963dc0c6637a7bc981d37fb5487e3ffa92 WatchSource:0}: Error finding container 4fd58fbc52e301ba91b715c959436d963dc0c6637a7bc981d37fb5487e3ffa92: Status 404 returned error can't find the container with id 4fd58fbc52e301ba91b715c959436d963dc0c6637a7bc981d37fb5487e3ffa92 Mar 09 16:26:24.565760 master-0 kubenswrapper[7604]: I0309 16:26:24.565708 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-fllqb" event={"ID":"c76178f6-3f0b-4b7d-ad23-724b83e35120","Type":"ContainerStarted","Data":"3ac592ce45f8f38cb178c62a7d71d80c751d9cdaecde3ef4fc7a32834fa3658e"} Mar 09 16:26:24.565760 master-0 kubenswrapper[7604]: I0309 16:26:24.565756 7604 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-node-tuning-operator/tuned-fllqb" event={"ID":"c76178f6-3f0b-4b7d-ad23-724b83e35120","Type":"ContainerStarted","Data":"4fd58fbc52e301ba91b715c959436d963dc0c6637a7bc981d37fb5487e3ffa92"} Mar 09 16:26:24.616482 master-0 kubenswrapper[7604]: I0309 16:26:24.615563 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-67495f79c-bcblv" podStartSLOduration=10.857252014 podStartE2EDuration="15.615541292s" podCreationTimestamp="2026-03-09 16:26:09 +0000 UTC" firstStartedPulling="2026-03-09 16:26:14.350274372 +0000 UTC m=+31.404243805" lastFinishedPulling="2026-03-09 16:26:19.10856366 +0000 UTC m=+36.162533083" observedRunningTime="2026-03-09 16:26:24.614010618 +0000 UTC m=+41.667980061" watchObservedRunningTime="2026-03-09 16:26:24.615541292 +0000 UTC m=+41.669510725" Mar 09 16:26:24.647020 master-0 kubenswrapper[7604]: I0309 16:26:24.646922 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-fllqb" podStartSLOduration=1.6469003309999999 podStartE2EDuration="1.646900331s" podCreationTimestamp="2026-03-09 16:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:24.645917964 +0000 UTC m=+41.699887407" watchObservedRunningTime="2026-03-09 16:26:24.646900331 +0000 UTC m=+41.700869754" Mar 09 16:26:26.139018 master-0 kubenswrapper[7604]: I0309 16:26:26.138624 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 09 16:26:26.141112 master-0 kubenswrapper[7604]: I0309 16:26:26.139909 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="818888c7-e9f0-4818-930b-94f55bbc66ca" containerName="installer" containerID="cri-o://379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423" 
gracePeriod=30 Mar 09 16:26:26.576631 master-0 kubenswrapper[7604]: I0309 16:26:26.576585 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" event={"ID":"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62","Type":"ContainerStarted","Data":"ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7"} Mar 09 16:26:26.576842 master-0 kubenswrapper[7604]: I0309 16:26:26.576813 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" Mar 09 16:26:26.578496 master-0 kubenswrapper[7604]: I0309 16:26:26.578024 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_818888c7-e9f0-4818-930b-94f55bbc66ca/installer/0.log" Mar 09 16:26:26.578496 master-0 kubenswrapper[7604]: I0309 16:26:26.578095 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 09 16:26:26.578956 master-0 kubenswrapper[7604]: I0309 16:26:26.578898 7604 patch_prober.go:28] interesting pod/controller-manager-79b49b464c-hl85g container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.40:8443/healthz\": dial tcp 10.128.0.40:8443: connect: connection refused" start-of-body= Mar 09 16:26:26.579104 master-0 kubenswrapper[7604]: I0309 16:26:26.578965 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" podUID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.40:8443/healthz\": dial tcp 10.128.0.40:8443: connect: connection refused" Mar 09 16:26:26.580171 master-0 kubenswrapper[7604]: I0309 16:26:26.580130 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_818888c7-e9f0-4818-930b-94f55bbc66ca/installer/0.log" Mar 09 16:26:26.580246 master-0 kubenswrapper[7604]: I0309 16:26:26.580218 7604 generic.go:334] "Generic (PLEG): container finished" podID="818888c7-e9f0-4818-930b-94f55bbc66ca" containerID="379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423" exitCode=1 Mar 09 16:26:26.580286 master-0 kubenswrapper[7604]: I0309 16:26:26.580270 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"818888c7-e9f0-4818-930b-94f55bbc66ca","Type":"ContainerDied","Data":"379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423"} Mar 09 16:26:26.580332 master-0 kubenswrapper[7604]: I0309 16:26:26.580313 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"818888c7-e9f0-4818-930b-94f55bbc66ca","Type":"ContainerDied","Data":"5d52fff78d21eb99fb89b3481244aa8759ed14a038b5d668684fa2f6dddcc4e2"} Mar 09 16:26:26.580368 master-0 kubenswrapper[7604]: I0309 16:26:26.580340 7604 scope.go:117] "RemoveContainer" containerID="379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423" Mar 09 16:26:26.592100 master-0 kubenswrapper[7604]: I0309 16:26:26.591996 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" podStartSLOduration=10.475970831 podStartE2EDuration="16.591978788s" podCreationTimestamp="2026-03-09 16:26:10 +0000 UTC" firstStartedPulling="2026-03-09 16:26:20.282093352 +0000 UTC m=+37.336062775" lastFinishedPulling="2026-03-09 16:26:26.398101309 +0000 UTC m=+43.452070732" observedRunningTime="2026-03-09 16:26:26.591036852 +0000 UTC m=+43.645006305" watchObservedRunningTime="2026-03-09 16:26:26.591978788 +0000 UTC m=+43.645948211" Mar 09 16:26:26.598858 master-0 kubenswrapper[7604]: I0309 16:26:26.598822 7604 scope.go:117] 
"RemoveContainer" containerID="379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423" Mar 09 16:26:26.605941 master-0 kubenswrapper[7604]: E0309 16:26:26.605890 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423\": container with ID starting with 379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423 not found: ID does not exist" containerID="379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423" Mar 09 16:26:26.606125 master-0 kubenswrapper[7604]: I0309 16:26:26.605946 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423"} err="failed to get container status \"379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423\": rpc error: code = NotFound desc = could not find container \"379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423\": container with ID starting with 379db437235a93b329529f34b069a0bbc4df534e541b5476ab096bccc150c423 not found: ID does not exist" Mar 09 16:26:26.722557 master-0 kubenswrapper[7604]: I0309 16:26:26.721675 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-kubelet-dir\") pod \"818888c7-e9f0-4818-930b-94f55bbc66ca\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " Mar 09 16:26:26.722557 master-0 kubenswrapper[7604]: I0309 16:26:26.721727 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-var-lock\") pod \"818888c7-e9f0-4818-930b-94f55bbc66ca\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " Mar 09 16:26:26.722557 master-0 kubenswrapper[7604]: I0309 16:26:26.721827 7604 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/818888c7-e9f0-4818-930b-94f55bbc66ca-kube-api-access\") pod \"818888c7-e9f0-4818-930b-94f55bbc66ca\" (UID: \"818888c7-e9f0-4818-930b-94f55bbc66ca\") " Mar 09 16:26:26.722871 master-0 kubenswrapper[7604]: I0309 16:26:26.722819 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-var-lock" (OuterVolumeSpecName: "var-lock") pod "818888c7-e9f0-4818-930b-94f55bbc66ca" (UID: "818888c7-e9f0-4818-930b-94f55bbc66ca"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:26.722946 master-0 kubenswrapper[7604]: I0309 16:26:26.722893 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "818888c7-e9f0-4818-930b-94f55bbc66ca" (UID: "818888c7-e9f0-4818-930b-94f55bbc66ca"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:26.728351 master-0 kubenswrapper[7604]: I0309 16:26:26.728286 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818888c7-e9f0-4818-930b-94f55bbc66ca-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "818888c7-e9f0-4818-930b-94f55bbc66ca" (UID: "818888c7-e9f0-4818-930b-94f55bbc66ca"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:26.823180 master-0 kubenswrapper[7604]: I0309 16:26:26.823114 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/818888c7-e9f0-4818-930b-94f55bbc66ca-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:26.823180 master-0 kubenswrapper[7604]: I0309 16:26:26.823160 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:26.823180 master-0 kubenswrapper[7604]: I0309 16:26:26.823174 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/818888c7-e9f0-4818-930b-94f55bbc66ca-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:27.024843 master-0 kubenswrapper[7604]: I0309 16:26:27.024727 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:27.027665 master-0 kubenswrapper[7604]: I0309 16:26:27.027621 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:27.090965 master-0 kubenswrapper[7604]: I0309 16:26:27.090908 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:27.190413 master-0 kubenswrapper[7604]: I0309 16:26:27.190354 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 09 16:26:27.190957 master-0 kubenswrapper[7604]: E0309 16:26:27.190603 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818888c7-e9f0-4818-930b-94f55bbc66ca" containerName="installer" Mar 09 16:26:27.190957 master-0 kubenswrapper[7604]: I0309 16:26:27.190621 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="818888c7-e9f0-4818-930b-94f55bbc66ca" containerName="installer" Mar 09 16:26:27.190957 master-0 kubenswrapper[7604]: I0309 16:26:27.190704 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="818888c7-e9f0-4818-930b-94f55bbc66ca" containerName="installer" Mar 09 16:26:27.191199 master-0 kubenswrapper[7604]: I0309 16:26:27.191107 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.193810 master-0 kubenswrapper[7604]: I0309 16:26:27.193785 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 09 16:26:27.209766 master-0 kubenswrapper[7604]: I0309 16:26:27.209705 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 09 16:26:27.329623 master-0 kubenswrapper[7604]: I0309 16:26:27.329557 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.329623 master-0 kubenswrapper[7604]: I0309 16:26:27.329619 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.329863 master-0 kubenswrapper[7604]: I0309 16:26:27.329743 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-var-lock\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.431283 master-0 kubenswrapper[7604]: I0309 16:26:27.431200 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kubelet-dir\") pod \"installer-1-master-0\" (UID: 
\"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.431283 master-0 kubenswrapper[7604]: I0309 16:26:27.431256 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.431607 master-0 kubenswrapper[7604]: I0309 16:26:27.431447 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.431607 master-0 kubenswrapper[7604]: I0309 16:26:27.431525 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-var-lock\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.431723 master-0 kubenswrapper[7604]: I0309 16:26:27.431691 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-var-lock\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.450197 master-0 kubenswrapper[7604]: I0309 16:26:27.450133 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.530203 master-0 kubenswrapper[7604]: I0309 16:26:27.530133 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:26:27.574479 master-0 kubenswrapper[7604]: I0309 16:26:27.573569 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"] Mar 09 16:26:27.582239 master-0 kubenswrapper[7604]: W0309 16:26:27.582196 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c93fb5d_373d_4473_99dd_50e4398bafbf.slice/crio-746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d WatchSource:0}: Error finding container 746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d: Status 404 returned error can't find the container with id 746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d Mar 09 16:26:27.590837 master-0 kubenswrapper[7604]: I0309 16:26:27.590326 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 09 16:26:27.595398 master-0 kubenswrapper[7604]: I0309 16:26:27.595323 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" Mar 09 16:26:27.655510 master-0 kubenswrapper[7604]: I0309 16:26:27.653181 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 09 16:26:27.659279 master-0 kubenswrapper[7604]: I0309 16:26:27.659240 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 09 16:26:27.988832 master-0 kubenswrapper[7604]: I0309 16:26:27.988785 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 09 16:26:27.996016 master-0 kubenswrapper[7604]: W0309 16:26:27.995958 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod963633a2_3f9d_4b82_9e53_d749fa52bf8e.slice/crio-7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee WatchSource:0}: Error finding container 7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee: Status 404 returned error can't find the container with id 7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee Mar 09 16:26:28.601576 master-0 kubenswrapper[7604]: I0309 16:26:28.601387 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"963633a2-3f9d-4b82-9e53-d749fa52bf8e","Type":"ContainerStarted","Data":"d41d86bd25e4bbee52e08006f2bc72adad98a14d24d258528deb873f333249a6"} Mar 09 16:26:28.601576 master-0 kubenswrapper[7604]: I0309 16:26:28.601469 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" 
event={"ID":"963633a2-3f9d-4b82-9e53-d749fa52bf8e","Type":"ContainerStarted","Data":"7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee"}
Mar 09 16:26:28.602464 master-0 kubenswrapper[7604]: I0309 16:26:28.602405 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" event={"ID":"8c93fb5d-373d-4473-99dd-50e4398bafbf","Type":"ContainerStarted","Data":"746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d"}
Mar 09 16:26:28.729523 master-0 kubenswrapper[7604]: I0309 16:26:28.729473 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 09 16:26:28.732402 master-0 kubenswrapper[7604]: I0309 16:26:28.730260 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.732402 master-0 kubenswrapper[7604]: I0309 16:26:28.730696 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=1.730642316 podStartE2EDuration="1.730642316s" podCreationTimestamp="2026-03-09 16:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:28.724654108 +0000 UTC m=+45.778623551" watchObservedRunningTime="2026-03-09 16:26:28.730642316 +0000 UTC m=+45.784611759"
Mar 09 16:26:28.732402 master-0 kubenswrapper[7604]: I0309 16:26:28.732379 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7kft5"
Mar 09 16:26:28.734517 master-0 kubenswrapper[7604]: I0309 16:26:28.733079 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 09 16:26:28.754297 master-0 kubenswrapper[7604]: I0309 16:26:28.754256 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.754504 master-0 kubenswrapper[7604]: I0309 16:26:28.754321 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-var-lock\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.754504 master-0 kubenswrapper[7604]: I0309 16:26:28.754341 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.855304 master-0 kubenswrapper[7604]: I0309 16:26:28.855070 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.855304 master-0 kubenswrapper[7604]: I0309 16:26:28.855248 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-var-lock\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.855304 master-0 kubenswrapper[7604]: I0309 16:26:28.855276 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.855782 master-0 kubenswrapper[7604]: I0309 16:26:28.855415 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-var-lock\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.855782 master-0 kubenswrapper[7604]: I0309 16:26:28.855617 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:28.926408 master-0 kubenswrapper[7604]: I0309 16:26:28.925840 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 09 16:26:28.926408 master-0 kubenswrapper[7604]: I0309 16:26:28.926327 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 09 16:26:28.926408 master-0 kubenswrapper[7604]: I0309 16:26:28.926409 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:28.931551 master-0 kubenswrapper[7604]: I0309 16:26:28.928075 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cshl6"
Mar 09 16:26:28.958272 master-0 kubenswrapper[7604]: I0309 16:26:28.958180 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8331cfd-d949-4967-a8d6-d40026ff92b7-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:28.958491 master-0 kubenswrapper[7604]: I0309 16:26:28.958321 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:28.958491 master-0 kubenswrapper[7604]: I0309 16:26:28.958383 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-var-lock\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.059333 master-0 kubenswrapper[7604]: I0309 16:26:29.059284 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8331cfd-d949-4967-a8d6-d40026ff92b7-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.059690 master-0 kubenswrapper[7604]: I0309 16:26:29.059613 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.059690 master-0 kubenswrapper[7604]: I0309 16:26:29.059652 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.059690 master-0 kubenswrapper[7604]: I0309 16:26:29.059685 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-var-lock\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.059791 master-0 kubenswrapper[7604]: I0309 16:26:29.059765 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-var-lock\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.117897 master-0 kubenswrapper[7604]: I0309 16:26:29.117793 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818888c7-e9f0-4818-930b-94f55bbc66ca" path="/var/lib/kubelet/pods/818888c7-e9f0-4818-930b-94f55bbc66ca/volumes"
Mar 09 16:26:29.127913 master-0 kubenswrapper[7604]: I0309 16:26:29.127857 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:29.128720 master-0 kubenswrapper[7604]: I0309 16:26:29.128678 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:29.176388 master-0 kubenswrapper[7604]: I0309 16:26:29.175704 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 09 16:26:29.177205 master-0 kubenswrapper[7604]: I0309 16:26:29.177163 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:29.177205 master-0 kubenswrapper[7604]: I0309 16:26:29.177195 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:29.353068 master-0 kubenswrapper[7604]: I0309 16:26:29.352758 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 09 16:26:29.447241 master-0 kubenswrapper[7604]: I0309 16:26:29.447200 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8331cfd-d949-4967-a8d6-d40026ff92b7-kube-api-access\") pod \"installer-3-master-0\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.562189 master-0 kubenswrapper[7604]: I0309 16:26:29.561793 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 09 16:26:29.576908 master-0 kubenswrapper[7604]: I0309 16:26:29.576821 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:29.581060 master-0 kubenswrapper[7604]: I0309 16:26:29.580676 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod \"route-controller-manager-7fbb6944d8-sbv7k\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:29.619850 master-0 kubenswrapper[7604]: I0309 16:26:29.619788 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:26:29.741887 master-0 kubenswrapper[7604]: I0309 16:26:29.741607 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"
Mar 09 16:26:30.630297 master-0 kubenswrapper[7604]: I0309 16:26:30.630210 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79b49b464c-hl85g"]
Mar 09 16:26:30.630969 master-0 kubenswrapper[7604]: I0309 16:26:30.630629 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" podUID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" containerName="controller-manager" containerID="cri-o://ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7" gracePeriod=30
Mar 09 16:26:30.634477 master-0 kubenswrapper[7604]: I0309 16:26:30.634197 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 09 16:26:30.653158 master-0 kubenswrapper[7604]: I0309 16:26:30.635648 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 09 16:26:31.122960 master-0 kubenswrapper[7604]: I0309 16:26:31.122906 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"]
Mar 09 16:26:31.676514 master-0 kubenswrapper[7604]: W0309 16:26:31.676408 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1e5298b1_ccde_4c18_8cdb_f415a4842f75.slice/crio-3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78 WatchSource:0}: Error finding container 3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78: Status 404 returned error can't find the container with id 3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78
Mar 09 16:26:31.677713 master-0 kubenswrapper[7604]: W0309 16:26:31.677668 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc8331cfd_d949_4967_a8d6_d40026ff92b7.slice/crio-35912bd3bb7f1dba19b422d68c5dd737cce31334966ad44292f203bb10f0a0e4 WatchSource:0}: Error finding container 35912bd3bb7f1dba19b422d68c5dd737cce31334966ad44292f203bb10f0a0e4: Status 404 returned error can't find the container with id 35912bd3bb7f1dba19b422d68c5dd737cce31334966ad44292f203bb10f0a0e4
Mar 09 16:26:31.770000 master-0 kubenswrapper[7604]: I0309 16:26:31.769950 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"]
Mar 09 16:26:32.157248 master-0 kubenswrapper[7604]: I0309 16:26:32.157190 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:32.235070 master-0 kubenswrapper[7604]: I0309 16:26:32.234945 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rvhv\" (UniqueName: \"kubernetes.io/projected/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-kube-api-access-6rvhv\") pod \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") "
Mar 09 16:26:32.235070 master-0 kubenswrapper[7604]: I0309 16:26:32.235039 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-client-ca\") pod \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") "
Mar 09 16:26:32.235257 master-0 kubenswrapper[7604]: I0309 16:26:32.235081 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-proxy-ca-bundles\") pod \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") "
Mar 09 16:26:32.235257 master-0 kubenswrapper[7604]: I0309 16:26:32.235120 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-config\") pod \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") "
Mar 09 16:26:32.235257 master-0 kubenswrapper[7604]: I0309 16:26:32.235203 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") pod \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\" (UID: \"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62\") "
Mar 09 16:26:32.235597 master-0 kubenswrapper[7604]: I0309 16:26:32.235561 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-client-ca" (OuterVolumeSpecName: "client-ca") pod "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:26:32.235760 master-0 kubenswrapper[7604]: I0309 16:26:32.235721 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-config" (OuterVolumeSpecName: "config") pod "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:26:32.235892 master-0 kubenswrapper[7604]: I0309 16:26:32.235866 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:26:32.238702 master-0 kubenswrapper[7604]: I0309 16:26:32.238634 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-kube-api-access-6rvhv" (OuterVolumeSpecName: "kube-api-access-6rvhv") pod "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62"). InnerVolumeSpecName "kube-api-access-6rvhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:26:32.239796 master-0 kubenswrapper[7604]: I0309 16:26:32.239729 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" (UID: "44c4c1a8-aa94-44b7-9f21-3a55a59dcb62"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:26:32.340450 master-0 kubenswrapper[7604]: I0309 16:26:32.339994 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:32.340450 master-0 kubenswrapper[7604]: I0309 16:26:32.340039 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rvhv\" (UniqueName: \"kubernetes.io/projected/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-kube-api-access-6rvhv\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:32.340450 master-0 kubenswrapper[7604]: I0309 16:26:32.340051 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:32.340450 master-0 kubenswrapper[7604]: I0309 16:26:32.340059 7604 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:32.340450 master-0 kubenswrapper[7604]: I0309 16:26:32.340068 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:26:32.629920 master-0 kubenswrapper[7604]: I0309 16:26:32.629883 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1e5298b1-ccde-4c18-8cdb-f415a4842f75","Type":"ContainerStarted","Data":"99a339c5f3968e16e82464c06f5f8bce759eee7e72f76870e9bcaf5b40dfae4f"}
Mar 09 16:26:32.630052 master-0 kubenswrapper[7604]: I0309 16:26:32.629928 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1e5298b1-ccde-4c18-8cdb-f415a4842f75","Type":"ContainerStarted","Data":"3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78"}
Mar 09 16:26:32.630957 master-0 kubenswrapper[7604]: I0309 16:26:32.630921 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c8331cfd-d949-4967-a8d6-d40026ff92b7","Type":"ContainerStarted","Data":"931dd8150895d0d2f69770cc11fdd2ec7c4212174a1ff10ed8dc68103147945d"}
Mar 09 16:26:32.631097 master-0 kubenswrapper[7604]: I0309 16:26:32.630958 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c8331cfd-d949-4967-a8d6-d40026ff92b7","Type":"ContainerStarted","Data":"35912bd3bb7f1dba19b422d68c5dd737cce31334966ad44292f203bb10f0a0e4"}
Mar 09 16:26:32.632163 master-0 kubenswrapper[7604]: I0309 16:26:32.632108 7604 generic.go:334] "Generic (PLEG): container finished" podID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" containerID="ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7" exitCode=0
Mar 09 16:26:32.632222 master-0 kubenswrapper[7604]: I0309 16:26:32.632192 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" event={"ID":"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62","Type":"ContainerDied","Data":"ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7"}
Mar 09 16:26:32.632222 master-0 kubenswrapper[7604]: I0309 16:26:32.632205 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g"
Mar 09 16:26:32.632303 master-0 kubenswrapper[7604]: I0309 16:26:32.632222 7604 scope.go:117] "RemoveContainer" containerID="ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7"
Mar 09 16:26:32.632352 master-0 kubenswrapper[7604]: I0309 16:26:32.632210 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79b49b464c-hl85g" event={"ID":"44c4c1a8-aa94-44b7-9f21-3a55a59dcb62","Type":"ContainerDied","Data":"091d88a0bd722f6065bc9407b96946956019ef156fb465a31f7684b95746ba3c"}
Mar 09 16:26:32.633595 master-0 kubenswrapper[7604]: I0309 16:26:32.633564 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" event={"ID":"442202b9-edf6-4d40-85e9-348b7bbe56e3","Type":"ContainerStarted","Data":"cdd3f5a872eff01f14423ad70d5c7abb4484fec3e4b77d0f18c44f49fd9445bf"}
Mar 09 16:26:32.657088 master-0 kubenswrapper[7604]: I0309 16:26:32.650611 7604 scope.go:117] "RemoveContainer" containerID="ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7"
Mar 09 16:26:32.658577 master-0 kubenswrapper[7604]: E0309 16:26:32.658531 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7\": container with ID starting with ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7 not found: ID does not exist" containerID="ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7"
Mar 09 16:26:32.658676 master-0 kubenswrapper[7604]: I0309 16:26:32.658586 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7"} err="failed to get container status \"ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7\": rpc error: code = NotFound desc = could not find container \"ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7\": container with ID starting with ffef80b96e677084c048e0c1fb4143d1f34f0199a32741f69c9fdbf59c2bd7b7 not found: ID does not exist"
Mar 09 16:26:32.717406 master-0 kubenswrapper[7604]: I0309 16:26:32.717236 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=4.717210334 podStartE2EDuration="4.717210334s" podCreationTimestamp="2026-03-09 16:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:32.706230096 +0000 UTC m=+49.760199539" watchObservedRunningTime="2026-03-09 16:26:32.717210334 +0000 UTC m=+49.771179757"
Mar 09 16:26:32.717889 master-0 kubenswrapper[7604]: I0309 16:26:32.717507 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"]
Mar 09 16:26:32.717889 master-0 kubenswrapper[7604]: E0309 16:26:32.717698 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" containerName="controller-manager"
Mar 09 16:26:32.717889 master-0 kubenswrapper[7604]: I0309 16:26:32.717713 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" containerName="controller-manager"
Mar 09 16:26:32.717889 master-0 kubenswrapper[7604]: I0309 16:26:32.717862 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" containerName="controller-manager"
Mar 09 16:26:32.718994 master-0 kubenswrapper[7604]: I0309 16:26:32.718315 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.720823 master-0 kubenswrapper[7604]: I0309 16:26:32.720805 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 09 16:26:32.721003 master-0 kubenswrapper[7604]: I0309 16:26:32.720924 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:26:32.721194 master-0 kubenswrapper[7604]: I0309 16:26:32.721181 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 09 16:26:32.721387 master-0 kubenswrapper[7604]: I0309 16:26:32.721373 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58glv"
Mar 09 16:26:32.721832 master-0 kubenswrapper[7604]: I0309 16:26:32.721645 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 16:26:32.721832 master-0 kubenswrapper[7604]: I0309 16:26:32.721683 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 16:26:32.729613 master-0 kubenswrapper[7604]: I0309 16:26:32.729556 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 09 16:26:32.768086 master-0 kubenswrapper[7604]: I0309 16:26:32.768055 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"]
Mar 09 16:26:32.855023 master-0 kubenswrapper[7604]: I0309 16:26:32.854873 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85vj\" (UniqueName: \"kubernetes.io/projected/7b7d1963-c3f0-42bc-8720-426927a37a47-kube-api-access-t85vj\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.855023 master-0 kubenswrapper[7604]: I0309 16:26:32.854951 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7d1963-c3f0-42bc-8720-426927a37a47-serving-cert\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.855023 master-0 kubenswrapper[7604]: I0309 16:26:32.854980 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-client-ca\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.855319 master-0 kubenswrapper[7604]: I0309 16:26:32.855133 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-proxy-ca-bundles\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.855319 master-0 kubenswrapper[7604]: I0309 16:26:32.855166 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-config\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.881791 master-0 kubenswrapper[7604]: I0309 16:26:32.881715 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79b49b464c-hl85g"]
Mar 09 16:26:32.956578 master-0 kubenswrapper[7604]: I0309 16:26:32.956513 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-config\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.956578 master-0 kubenswrapper[7604]: I0309 16:26:32.956600 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85vj\" (UniqueName: \"kubernetes.io/projected/7b7d1963-c3f0-42bc-8720-426927a37a47-kube-api-access-t85vj\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.959024 master-0 kubenswrapper[7604]: I0309 16:26:32.957786 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7d1963-c3f0-42bc-8720-426927a37a47-serving-cert\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.959024 master-0 kubenswrapper[7604]: I0309 16:26:32.957856 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-client-ca\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.959024 master-0 kubenswrapper[7604]: I0309 16:26:32.958062 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-proxy-ca-bundles\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.959024 master-0 kubenswrapper[7604]: I0309 16:26:32.958636 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-client-ca\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.959179 master-0 kubenswrapper[7604]: I0309 16:26:32.959060 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-proxy-ca-bundles\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.959488 master-0 kubenswrapper[7604]: I0309 16:26:32.959449 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-config\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.961334 master-0 kubenswrapper[7604]: I0309 16:26:32.961292 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7d1963-c3f0-42bc-8720-426927a37a47-serving-cert\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:32.983500 master-0 kubenswrapper[7604]: I0309 16:26:32.981599 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79b49b464c-hl85g"]
Mar 09 16:26:33.117221 master-0 kubenswrapper[7604]: I0309 16:26:33.117095 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c4c1a8-aa94-44b7-9f21-3a55a59dcb62" path="/var/lib/kubelet/pods/44c4c1a8-aa94-44b7-9f21-3a55a59dcb62/volumes"
Mar 09 16:26:33.244833 master-0 kubenswrapper[7604]: I0309 16:26:33.244755 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85vj\" (UniqueName: \"kubernetes.io/projected/7b7d1963-c3f0-42bc-8720-426927a37a47-kube-api-access-t85vj\") pod \"controller-manager-6d49b645c4-2hd5r\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:33.274563 master-0 kubenswrapper[7604]: I0309 16:26:33.270322 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=5.27030327 podStartE2EDuration="5.27030327s" podCreationTimestamp="2026-03-09 16:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:33.269296172 +0000 UTC m=+50.323265615" watchObservedRunningTime="2026-03-09 16:26:33.27030327 +0000 UTC m=+50.324272693"
Mar 09 16:26:33.337694 master-0 kubenswrapper[7604]: I0309 16:26:33.337654 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"
Mar 09 16:26:33.639830 master-0 kubenswrapper[7604]: I0309 16:26:33.639770 7604 generic.go:334] "Generic (PLEG): container finished" podID="8c93fb5d-373d-4473-99dd-50e4398bafbf" containerID="2ddc6aee7d8d1006c27dca4fe5b21a0e258f10014a5a8ed340c294e3e6bda574" exitCode=0
Mar 09 16:26:33.640528 master-0 kubenswrapper[7604]: I0309 16:26:33.640498 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" event={"ID":"8c93fb5d-373d-4473-99dd-50e4398bafbf","Type":"ContainerDied","Data":"2ddc6aee7d8d1006c27dca4fe5b21a0e258f10014a5a8ed340c294e3e6bda574"}
Mar 09 16:26:33.839723 master-0 kubenswrapper[7604]: I0309 16:26:33.839632 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"]
Mar 09 16:26:34.645613 master-0 kubenswrapper[7604]: I0309 16:26:34.645566 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" event={"ID":"7b7d1963-c3f0-42bc-8720-426927a37a47","Type":"ContainerStarted","Data":"db3ce33d227af9c594dddc7530e159f986bfcc3583631b361184c95de3a6f124"}
Mar 09 16:26:34.763719 master-0 kubenswrapper[7604]: I0309 16:26:34.763651 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"]
Mar 09 16:26:34.764139 master-0 kubenswrapper[7604]: I0309 16:26:34.763893 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" podUID="77a20946-c236-417e-8333-6d1aac88bbc2" containerName="cluster-version-operator" containerID="cri-o://9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895" gracePeriod=130
Mar 09 16:26:34.914276 master-0 kubenswrapper[7604]: I0309 16:26:34.914238 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"
Mar 09 16:26:34.994445 master-0 kubenswrapper[7604]: I0309 16:26:34.993762 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") pod \"77a20946-c236-417e-8333-6d1aac88bbc2\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") "
Mar 09 16:26:34.994445 master-0 kubenswrapper[7604]: I0309 16:26:34.993833 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") pod \"77a20946-c236-417e-8333-6d1aac88bbc2\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") "
Mar 09 16:26:34.994445 master-0 kubenswrapper[7604]: I0309 16:26:34.993921 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") pod \"77a20946-c236-417e-8333-6d1aac88bbc2\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") "
Mar 09 16:26:34.994445 master-0 kubenswrapper[7604]: I0309 16:26:34.993970 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") pod \"77a20946-c236-417e-8333-6d1aac88bbc2\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") "
Mar 09 16:26:34.994445 master-0 kubenswrapper[7604]: I0309 16:26:34.993992 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") pod
\"77a20946-c236-417e-8333-6d1aac88bbc2\" (UID: \"77a20946-c236-417e-8333-6d1aac88bbc2\") " Mar 09 16:26:34.994773 master-0 kubenswrapper[7604]: I0309 16:26:34.994535 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "77a20946-c236-417e-8333-6d1aac88bbc2" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:34.994773 master-0 kubenswrapper[7604]: I0309 16:26:34.994546 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "77a20946-c236-417e-8333-6d1aac88bbc2" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:34.998440 master-0 kubenswrapper[7604]: I0309 16:26:34.995266 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca" (OuterVolumeSpecName: "service-ca") pod "77a20946-c236-417e-8333-6d1aac88bbc2" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:35.003476 master-0 kubenswrapper[7604]: I0309 16:26:35.001736 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "77a20946-c236-417e-8333-6d1aac88bbc2" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:26:35.014298 master-0 kubenswrapper[7604]: I0309 16:26:35.014244 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "77a20946-c236-417e-8333-6d1aac88bbc2" (UID: "77a20946-c236-417e-8333-6d1aac88bbc2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:35.095771 master-0 kubenswrapper[7604]: I0309 16:26:35.095631 7604 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:35.095771 master-0 kubenswrapper[7604]: I0309 16:26:35.095670 7604 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/77a20946-c236-417e-8333-6d1aac88bbc2-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:35.095771 master-0 kubenswrapper[7604]: I0309 16:26:35.095681 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a20946-c236-417e-8333-6d1aac88bbc2-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:35.095771 master-0 kubenswrapper[7604]: I0309 16:26:35.095691 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77a20946-c236-417e-8333-6d1aac88bbc2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:35.095771 master-0 kubenswrapper[7604]: I0309 16:26:35.095700 7604 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/77a20946-c236-417e-8333-6d1aac88bbc2-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:35.650696 master-0 kubenswrapper[7604]: I0309 
16:26:35.650618 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" event={"ID":"7b7d1963-c3f0-42bc-8720-426927a37a47","Type":"ContainerStarted","Data":"15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d"} Mar 09 16:26:35.651228 master-0 kubenswrapper[7604]: I0309 16:26:35.651059 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" Mar 09 16:26:35.652679 master-0 kubenswrapper[7604]: I0309 16:26:35.652632 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" event={"ID":"442202b9-edf6-4d40-85e9-348b7bbe56e3","Type":"ContainerStarted","Data":"9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3"} Mar 09 16:26:35.652814 master-0 kubenswrapper[7604]: I0309 16:26:35.652735 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" podUID="442202b9-edf6-4d40-85e9-348b7bbe56e3" containerName="route-controller-manager" containerID="cri-o://9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3" gracePeriod=30 Mar 09 16:26:35.653669 master-0 kubenswrapper[7604]: I0309 16:26:35.652882 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:35.656075 master-0 kubenswrapper[7604]: I0309 16:26:35.655189 7604 generic.go:334] "Generic (PLEG): container finished" podID="77a20946-c236-417e-8333-6d1aac88bbc2" containerID="9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895" exitCode=0 Mar 09 16:26:35.656075 master-0 kubenswrapper[7604]: I0309 16:26:35.655238 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" 
event={"ID":"77a20946-c236-417e-8333-6d1aac88bbc2","Type":"ContainerDied","Data":"9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895"} Mar 09 16:26:35.656075 master-0 kubenswrapper[7604]: I0309 16:26:35.655257 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" event={"ID":"77a20946-c236-417e-8333-6d1aac88bbc2","Type":"ContainerDied","Data":"8a9a6d38115c98f2a33ab555eccd9b2fe5937b945bc7ebcd3ee8e747b92f50a4"} Mar 09 16:26:35.656075 master-0 kubenswrapper[7604]: I0309 16:26:35.655274 7604 scope.go:117] "RemoveContainer" containerID="9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895" Mar 09 16:26:35.656075 master-0 kubenswrapper[7604]: I0309 16:26:35.655371 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk" Mar 09 16:26:35.660600 master-0 kubenswrapper[7604]: I0309 16:26:35.660206 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" event={"ID":"8c93fb5d-373d-4473-99dd-50e4398bafbf","Type":"ContainerStarted","Data":"e976aa57a55599a007ac519f8c18f921f0b402e68a8ad32a2ffe34cce17ceff5"} Mar 09 16:26:35.660600 master-0 kubenswrapper[7604]: I0309 16:26:35.660308 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" Mar 09 16:26:35.665133 master-0 kubenswrapper[7604]: I0309 16:26:35.665080 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:35.673365 master-0 kubenswrapper[7604]: I0309 16:26:35.673307 7604 scope.go:117] "RemoveContainer" containerID="9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895" Mar 09 16:26:35.674160 master-0 kubenswrapper[7604]: E0309 16:26:35.674100 7604 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895\": container with ID starting with 9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895 not found: ID does not exist" containerID="9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895" Mar 09 16:26:35.674238 master-0 kubenswrapper[7604]: I0309 16:26:35.674162 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895"} err="failed to get container status \"9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895\": rpc error: code = NotFound desc = could not find container \"9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895\": container with ID starting with 9cb538223ea090bfac34ace8924d69e0852d063dc5707486eecb0b1762bb1895 not found: ID does not exist" Mar 09 16:26:35.679760 master-0 kubenswrapper[7604]: I0309 16:26:35.679701 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" podStartSLOduration=4.679687552 podStartE2EDuration="4.679687552s" podCreationTimestamp="2026-03-09 16:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:35.677995596 +0000 UTC m=+52.731965029" watchObservedRunningTime="2026-03-09 16:26:35.679687552 +0000 UTC m=+52.733656985" Mar 09 16:26:35.704947 master-0 kubenswrapper[7604]: I0309 16:26:35.703969 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" podStartSLOduration=22.887359096 podStartE2EDuration="25.703950793s" podCreationTimestamp="2026-03-09 16:26:10 +0000 UTC" firstStartedPulling="2026-03-09 16:26:31.780130315 
+0000 UTC m=+48.834099738" lastFinishedPulling="2026-03-09 16:26:34.596722022 +0000 UTC m=+51.650691435" observedRunningTime="2026-03-09 16:26:35.703008257 +0000 UTC m=+52.756977690" watchObservedRunningTime="2026-03-09 16:26:35.703950793 +0000 UTC m=+52.757920216" Mar 09 16:26:35.732839 master-0 kubenswrapper[7604]: I0309 16:26:35.731050 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"] Mar 09 16:26:35.738802 master-0 kubenswrapper[7604]: I0309 16:26:35.738692 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-pwnsk"] Mar 09 16:26:35.851513 master-0 kubenswrapper[7604]: I0309 16:26:35.850805 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" podStartSLOduration=20.038110848 podStartE2EDuration="24.850783332s" podCreationTimestamp="2026-03-09 16:26:11 +0000 UTC" firstStartedPulling="2026-03-09 16:26:27.585707606 +0000 UTC m=+44.639677029" lastFinishedPulling="2026-03-09 16:26:32.39838009 +0000 UTC m=+49.452349513" observedRunningTime="2026-03-09 16:26:35.843097817 +0000 UTC m=+52.897067250" watchObservedRunningTime="2026-03-09 16:26:35.850783332 +0000 UTC m=+52.904752755" Mar 09 16:26:35.852396 master-0 kubenswrapper[7604]: I0309 16:26:35.852353 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf"] Mar 09 16:26:35.852612 master-0 kubenswrapper[7604]: E0309 16:26:35.852589 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a20946-c236-417e-8333-6d1aac88bbc2" containerName="cluster-version-operator" Mar 09 16:26:35.852612 master-0 kubenswrapper[7604]: I0309 16:26:35.852607 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a20946-c236-417e-8333-6d1aac88bbc2" containerName="cluster-version-operator" Mar 09 16:26:35.852713 master-0 
kubenswrapper[7604]: I0309 16:26:35.852684 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a20946-c236-417e-8333-6d1aac88bbc2" containerName="cluster-version-operator" Mar 09 16:26:35.852999 master-0 kubenswrapper[7604]: I0309 16:26:35.852980 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:35.856483 master-0 kubenswrapper[7604]: I0309 16:26:35.856453 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 09 16:26:35.857320 master-0 kubenswrapper[7604]: I0309 16:26:35.857234 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-jtzms" Mar 09 16:26:35.857536 master-0 kubenswrapper[7604]: I0309 16:26:35.857488 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 09 16:26:35.857659 master-0 kubenswrapper[7604]: I0309 16:26:35.857640 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 09 16:26:35.912070 master-0 kubenswrapper[7604]: I0309 16:26:35.911396 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:35.912070 master-0 kubenswrapper[7604]: I0309 16:26:35.911560 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:35.912070 master-0 kubenswrapper[7604]: I0309 16:26:35.911676 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:35.912070 master-0 kubenswrapper[7604]: I0309 16:26:35.911721 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-service-ca\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:35.912070 master-0 kubenswrapper[7604]: I0309 16:26:35.911770 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.020385 master-0 kubenswrapper[7604]: I0309 16:26:36.019635 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-service-ca\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.020385 
master-0 kubenswrapper[7604]: I0309 16:26:36.019716 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.020385 master-0 kubenswrapper[7604]: I0309 16:26:36.019764 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.020385 master-0 kubenswrapper[7604]: I0309 16:26:36.019813 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.020385 master-0 kubenswrapper[7604]: I0309 16:26:36.019869 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.020385 master-0 kubenswrapper[7604]: I0309 16:26:36.019952 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.021115 master-0 kubenswrapper[7604]: I0309 16:26:36.020728 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-service-ca\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.037122 master-0 kubenswrapper[7604]: I0309 16:26:36.021632 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.037122 master-0 kubenswrapper[7604]: I0309 16:26:36.030019 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.062450 master-0 kubenswrapper[7604]: I0309 16:26:36.058745 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 
16:26:36.165763 master-0 kubenswrapper[7604]: I0309 16:26:36.165649 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:36.182749 master-0 kubenswrapper[7604]: I0309 16:26:36.182699 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:26:36.200893 master-0 kubenswrapper[7604]: W0309 16:26:36.200845 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf7dea5_9848_41f0_bf0b_ec70ec0380f1.slice/crio-f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5 WatchSource:0}: Error finding container f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5: Status 404 returned error can't find the container with id f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5 Mar 09 16:26:36.325769 master-0 kubenswrapper[7604]: I0309 16:26:36.325726 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-client-ca\") pod \"442202b9-edf6-4d40-85e9-348b7bbe56e3\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " Mar 09 16:26:36.325960 master-0 kubenswrapper[7604]: I0309 16:26:36.325825 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-config\") pod \"442202b9-edf6-4d40-85e9-348b7bbe56e3\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " Mar 09 16:26:36.325960 master-0 kubenswrapper[7604]: I0309 16:26:36.325851 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") pod 
\"442202b9-edf6-4d40-85e9-348b7bbe56e3\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " Mar 09 16:26:36.325960 master-0 kubenswrapper[7604]: I0309 16:26:36.325912 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5mf6\" (UniqueName: \"kubernetes.io/projected/442202b9-edf6-4d40-85e9-348b7bbe56e3-kube-api-access-h5mf6\") pod \"442202b9-edf6-4d40-85e9-348b7bbe56e3\" (UID: \"442202b9-edf6-4d40-85e9-348b7bbe56e3\") " Mar 09 16:26:36.326552 master-0 kubenswrapper[7604]: I0309 16:26:36.326480 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-client-ca" (OuterVolumeSpecName: "client-ca") pod "442202b9-edf6-4d40-85e9-348b7bbe56e3" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:36.326890 master-0 kubenswrapper[7604]: I0309 16:26:36.326848 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-config" (OuterVolumeSpecName: "config") pod "442202b9-edf6-4d40-85e9-348b7bbe56e3" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:26:36.328913 master-0 kubenswrapper[7604]: I0309 16:26:36.328889 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "442202b9-edf6-4d40-85e9-348b7bbe56e3" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:26:36.329113 master-0 kubenswrapper[7604]: I0309 16:26:36.329087 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442202b9-edf6-4d40-85e9-348b7bbe56e3-kube-api-access-h5mf6" (OuterVolumeSpecName: "kube-api-access-h5mf6") pod "442202b9-edf6-4d40-85e9-348b7bbe56e3" (UID: "442202b9-edf6-4d40-85e9-348b7bbe56e3"). InnerVolumeSpecName "kube-api-access-h5mf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:36.427815 master-0 kubenswrapper[7604]: I0309 16:26:36.427707 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:36.427815 master-0 kubenswrapper[7604]: I0309 16:26:36.427754 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442202b9-edf6-4d40-85e9-348b7bbe56e3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:36.427815 master-0 kubenswrapper[7604]: I0309 16:26:36.427768 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5mf6\" (UniqueName: \"kubernetes.io/projected/442202b9-edf6-4d40-85e9-348b7bbe56e3-kube-api-access-h5mf6\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:36.427815 master-0 kubenswrapper[7604]: I0309 16:26:36.427780 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/442202b9-edf6-4d40-85e9-348b7bbe56e3-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:36.668683 master-0 kubenswrapper[7604]: I0309 16:26:36.668622 7604 generic.go:334] "Generic (PLEG): container finished" podID="442202b9-edf6-4d40-85e9-348b7bbe56e3" containerID="9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3" exitCode=0 Mar 09 16:26:36.668683 master-0 kubenswrapper[7604]: I0309 16:26:36.668675 7604 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" event={"ID":"442202b9-edf6-4d40-85e9-348b7bbe56e3","Type":"ContainerDied","Data":"9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3"} Mar 09 16:26:36.668958 master-0 kubenswrapper[7604]: I0309 16:26:36.668730 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" event={"ID":"442202b9-edf6-4d40-85e9-348b7bbe56e3","Type":"ContainerDied","Data":"cdd3f5a872eff01f14423ad70d5c7abb4484fec3e4b77d0f18c44f49fd9445bf"} Mar 09 16:26:36.668958 master-0 kubenswrapper[7604]: I0309 16:26:36.668724 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k" Mar 09 16:26:36.668958 master-0 kubenswrapper[7604]: I0309 16:26:36.668749 7604 scope.go:117] "RemoveContainer" containerID="9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3" Mar 09 16:26:36.670799 master-0 kubenswrapper[7604]: I0309 16:26:36.670759 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" event={"ID":"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1","Type":"ContainerStarted","Data":"cf28483378cea782ea700907bc68169878c403e836eb639a2889f087184ba71c"} Mar 09 16:26:36.670799 master-0 kubenswrapper[7604]: I0309 16:26:36.670793 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" event={"ID":"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1","Type":"ContainerStarted","Data":"f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5"} Mar 09 16:26:36.681516 master-0 kubenswrapper[7604]: I0309 16:26:36.681486 7604 scope.go:117] "RemoveContainer" containerID="9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3" Mar 09 16:26:36.681989 
master-0 kubenswrapper[7604]: E0309 16:26:36.681939 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3\": container with ID starting with 9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3 not found: ID does not exist" containerID="9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3" Mar 09 16:26:36.682037 master-0 kubenswrapper[7604]: I0309 16:26:36.681997 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3"} err="failed to get container status \"9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3\": rpc error: code = NotFound desc = could not find container \"9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3\": container with ID starting with 9d81d6fd92280c6f018fd50a7ef22ab9eac744786fe0cc08e3345ddd120fb9c3 not found: ID does not exist" Mar 09 16:26:36.776048 master-0 kubenswrapper[7604]: I0309 16:26:36.776000 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"] Mar 09 16:26:36.785404 master-0 kubenswrapper[7604]: I0309 16:26:36.785353 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fbb6944d8-sbv7k"] Mar 09 16:26:37.091328 master-0 kubenswrapper[7604]: I0309 16:26:37.091276 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:37.091328 master-0 kubenswrapper[7604]: I0309 16:26:37.091334 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:37.097054 master-0 kubenswrapper[7604]: I0309 16:26:37.097007 7604 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:37.118788 master-0 kubenswrapper[7604]: I0309 16:26:37.118682 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442202b9-edf6-4d40-85e9-348b7bbe56e3" path="/var/lib/kubelet/pods/442202b9-edf6-4d40-85e9-348b7bbe56e3/volumes" Mar 09 16:26:37.119437 master-0 kubenswrapper[7604]: I0309 16:26:37.119376 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a20946-c236-417e-8333-6d1aac88bbc2" path="/var/lib/kubelet/pods/77a20946-c236-417e-8333-6d1aac88bbc2/volumes" Mar 09 16:26:37.138261 master-0 kubenswrapper[7604]: I0309 16:26:37.138165 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" podStartSLOduration=2.138134758 podStartE2EDuration="2.138134758s" podCreationTimestamp="2026-03-09 16:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:36.810612739 +0000 UTC m=+53.864582172" watchObservedRunningTime="2026-03-09 16:26:37.138134758 +0000 UTC m=+54.192104171" Mar 09 16:26:37.532384 master-0 kubenswrapper[7604]: I0309 16:26:37.532326 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 09 16:26:37.532662 master-0 kubenswrapper[7604]: I0309 16:26:37.532626 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="c8331cfd-d949-4967-a8d6-d40026ff92b7" containerName="installer" containerID="cri-o://931dd8150895d0d2f69770cc11fdd2ec7c4212174a1ff10ed8dc68103147945d" gracePeriod=30 Mar 09 16:26:37.678091 master-0 kubenswrapper[7604]: I0309 16:26:37.678051 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c8331cfd-d949-4967-a8d6-d40026ff92b7/installer/0.log" Mar 09 16:26:37.678324 master-0 kubenswrapper[7604]: I0309 16:26:37.678110 7604 generic.go:334] "Generic (PLEG): container finished" podID="c8331cfd-d949-4967-a8d6-d40026ff92b7" containerID="931dd8150895d0d2f69770cc11fdd2ec7c4212174a1ff10ed8dc68103147945d" exitCode=1 Mar 09 16:26:37.678324 master-0 kubenswrapper[7604]: I0309 16:26:37.678164 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c8331cfd-d949-4967-a8d6-d40026ff92b7","Type":"ContainerDied","Data":"931dd8150895d0d2f69770cc11fdd2ec7c4212174a1ff10ed8dc68103147945d"} Mar 09 16:26:37.685019 master-0 kubenswrapper[7604]: I0309 16:26:37.684933 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:26:37.968123 master-0 kubenswrapper[7604]: I0309 16:26:37.968055 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c8331cfd-d949-4967-a8d6-d40026ff92b7/installer/0.log" Mar 09 16:26:37.968123 master-0 kubenswrapper[7604]: I0309 16:26:37.968122 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 09 16:26:38.053811 master-0 kubenswrapper[7604]: I0309 16:26:38.053739 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-kubelet-dir\") pod \"c8331cfd-d949-4967-a8d6-d40026ff92b7\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " Mar 09 16:26:38.054097 master-0 kubenswrapper[7604]: I0309 16:26:38.053842 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8331cfd-d949-4967-a8d6-d40026ff92b7-kube-api-access\") pod \"c8331cfd-d949-4967-a8d6-d40026ff92b7\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " Mar 09 16:26:38.054097 master-0 kubenswrapper[7604]: I0309 16:26:38.053866 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-var-lock\") pod \"c8331cfd-d949-4967-a8d6-d40026ff92b7\" (UID: \"c8331cfd-d949-4967-a8d6-d40026ff92b7\") " Mar 09 16:26:38.054097 master-0 kubenswrapper[7604]: I0309 16:26:38.053977 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8331cfd-d949-4967-a8d6-d40026ff92b7" (UID: "c8331cfd-d949-4967-a8d6-d40026ff92b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:38.054249 master-0 kubenswrapper[7604]: I0309 16:26:38.054102 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-var-lock" (OuterVolumeSpecName: "var-lock") pod "c8331cfd-d949-4967-a8d6-d40026ff92b7" (UID: "c8331cfd-d949-4967-a8d6-d40026ff92b7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:38.054587 master-0 kubenswrapper[7604]: I0309 16:26:38.054532 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:38.054671 master-0 kubenswrapper[7604]: I0309 16:26:38.054585 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8331cfd-d949-4967-a8d6-d40026ff92b7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:38.056977 master-0 kubenswrapper[7604]: I0309 16:26:38.056938 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8331cfd-d949-4967-a8d6-d40026ff92b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8331cfd-d949-4967-a8d6-d40026ff92b7" (UID: "c8331cfd-d949-4967-a8d6-d40026ff92b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:38.155694 master-0 kubenswrapper[7604]: I0309 16:26:38.155647 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8331cfd-d949-4967-a8d6-d40026ff92b7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:38.683340 master-0 kubenswrapper[7604]: I0309 16:26:38.683285 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_c8331cfd-d949-4967-a8d6-d40026ff92b7/installer/0.log" Mar 09 16:26:38.683556 master-0 kubenswrapper[7604]: I0309 16:26:38.683395 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 09 16:26:38.683556 master-0 kubenswrapper[7604]: I0309 16:26:38.683442 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"c8331cfd-d949-4967-a8d6-d40026ff92b7","Type":"ContainerDied","Data":"35912bd3bb7f1dba19b422d68c5dd737cce31334966ad44292f203bb10f0a0e4"} Mar 09 16:26:38.683556 master-0 kubenswrapper[7604]: I0309 16:26:38.683478 7604 scope.go:117] "RemoveContainer" containerID="931dd8150895d0d2f69770cc11fdd2ec7c4212174a1ff10ed8dc68103147945d" Mar 09 16:26:38.717629 master-0 kubenswrapper[7604]: I0309 16:26:38.717564 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 09 16:26:38.725035 master-0 kubenswrapper[7604]: I0309 16:26:38.724909 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 09 16:26:39.123817 master-0 kubenswrapper[7604]: I0309 16:26:39.123694 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8331cfd-d949-4967-a8d6-d40026ff92b7" path="/var/lib/kubelet/pods/c8331cfd-d949-4967-a8d6-d40026ff92b7/volumes" Mar 09 16:26:39.675721 master-0 kubenswrapper[7604]: I0309 16:26:39.675651 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw"] Mar 09 16:26:39.676221 master-0 kubenswrapper[7604]: E0309 16:26:39.675846 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8331cfd-d949-4967-a8d6-d40026ff92b7" containerName="installer" Mar 09 16:26:39.676221 master-0 kubenswrapper[7604]: I0309 16:26:39.675859 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8331cfd-d949-4967-a8d6-d40026ff92b7" containerName="installer" Mar 09 16:26:39.676221 master-0 kubenswrapper[7604]: E0309 16:26:39.675867 7604 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="442202b9-edf6-4d40-85e9-348b7bbe56e3" containerName="route-controller-manager" Mar 09 16:26:39.676221 master-0 kubenswrapper[7604]: I0309 16:26:39.675873 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="442202b9-edf6-4d40-85e9-348b7bbe56e3" containerName="route-controller-manager" Mar 09 16:26:39.676221 master-0 kubenswrapper[7604]: I0309 16:26:39.675957 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8331cfd-d949-4967-a8d6-d40026ff92b7" containerName="installer" Mar 09 16:26:39.676221 master-0 kubenswrapper[7604]: I0309 16:26:39.675971 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="442202b9-edf6-4d40-85e9-348b7bbe56e3" containerName="route-controller-manager" Mar 09 16:26:39.676384 master-0 kubenswrapper[7604]: I0309 16:26:39.676278 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.678391 master-0 kubenswrapper[7604]: I0309 16:26:39.678358 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 16:26:39.678735 master-0 kubenswrapper[7604]: I0309 16:26:39.678698 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 16:26:39.678812 master-0 kubenswrapper[7604]: I0309 16:26:39.678782 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 16:26:39.678977 master-0 kubenswrapper[7604]: I0309 16:26:39.678938 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 16:26:39.680736 master-0 kubenswrapper[7604]: I0309 16:26:39.680695 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 16:26:39.691548 master-0 
kubenswrapper[7604]: I0309 16:26:39.691505 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw"] Mar 09 16:26:39.746061 master-0 kubenswrapper[7604]: I0309 16:26:39.746016 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 09 16:26:39.746563 master-0 kubenswrapper[7604]: I0309 16:26:39.746535 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.751118 master-0 kubenswrapper[7604]: I0309 16:26:39.751068 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cshl6" Mar 09 16:26:39.762449 master-0 kubenswrapper[7604]: I0309 16:26:39.762388 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 09 16:26:39.778319 master-0 kubenswrapper[7604]: I0309 16:26:39.775998 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtt6\" (UniqueName: \"kubernetes.io/projected/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-kube-api-access-thtt6\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.778319 master-0 kubenswrapper[7604]: I0309 16:26:39.776153 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-config\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.778319 master-0 kubenswrapper[7604]: I0309 16:26:39.776203 7604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-client-ca\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.778319 master-0 kubenswrapper[7604]: I0309 16:26:39.776446 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-serving-cert\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.877690 master-0 kubenswrapper[7604]: I0309 16:26:39.877622 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtt6\" (UniqueName: \"kubernetes.io/projected/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-kube-api-access-thtt6\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.877894 master-0 kubenswrapper[7604]: I0309 16:26:39.877730 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-config\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.877894 master-0 kubenswrapper[7604]: I0309 16:26:39.877762 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-client-ca\") pod 
\"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.877894 master-0 kubenswrapper[7604]: I0309 16:26:39.877801 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-var-lock\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.877894 master-0 kubenswrapper[7604]: I0309 16:26:39.877827 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.877894 master-0 kubenswrapper[7604]: I0309 16:26:39.877859 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.878031 master-0 kubenswrapper[7604]: I0309 16:26:39.877950 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-serving-cert\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.879290 master-0 kubenswrapper[7604]: I0309 16:26:39.879252 7604 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-client-ca\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.879665 master-0 kubenswrapper[7604]: I0309 16:26:39.879638 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-config\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.881539 master-0 kubenswrapper[7604]: I0309 16:26:39.881512 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-serving-cert\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.893976 master-0 kubenswrapper[7604]: I0309 16:26:39.893917 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtt6\" (UniqueName: \"kubernetes.io/projected/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-kube-api-access-thtt6\") pod \"route-controller-manager-b6f88c7d8-qqvsw\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.979349 master-0 kubenswrapper[7604]: I0309 16:26:39.979279 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-var-lock\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" 
Mar 09 16:26:39.979349 master-0 kubenswrapper[7604]: I0309 16:26:39.979332 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.979349 master-0 kubenswrapper[7604]: I0309 16:26:39.979352 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.979681 master-0 kubenswrapper[7604]: I0309 16:26:39.979649 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.979833 master-0 kubenswrapper[7604]: I0309 16:26:39.979791 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-var-lock\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:39.991336 master-0 kubenswrapper[7604]: I0309 16:26:39.991256 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:39.997868 master-0 kubenswrapper[7604]: I0309 16:26:39.997792 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:40.060746 master-0 kubenswrapper[7604]: I0309 16:26:40.060673 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:26:40.368981 master-0 kubenswrapper[7604]: I0309 16:26:40.368779 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw"] Mar 09 16:26:40.378274 master-0 kubenswrapper[7604]: W0309 16:26:40.378207 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod067290d0_06ec_4bb5_8618_b7b52a8b6bb1.slice/crio-8c126274065003e766bcdae94018421423730b127196a7f83e555d62d1340c2b WatchSource:0}: Error finding container 8c126274065003e766bcdae94018421423730b127196a7f83e555d62d1340c2b: Status 404 returned error can't find the container with id 8c126274065003e766bcdae94018421423730b127196a7f83e555d62d1340c2b Mar 09 16:26:40.471155 master-0 kubenswrapper[7604]: I0309 16:26:40.471089 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 09 16:26:40.481744 master-0 kubenswrapper[7604]: W0309 16:26:40.481537 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6d95c7ed_e3ea_4383_b083_1df5df078f1c.slice/crio-a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39 WatchSource:0}: Error finding container 
a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39: Status 404 returned error can't find the container with id a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39 Mar 09 16:26:40.700674 master-0 kubenswrapper[7604]: I0309 16:26:40.700596 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"6d95c7ed-e3ea-4383-b083-1df5df078f1c","Type":"ContainerStarted","Data":"8de19850c9308d09c5cd12077a0d9362d507f0d6192f1e12c63ed63d09fea438"} Mar 09 16:26:40.700674 master-0 kubenswrapper[7604]: I0309 16:26:40.700671 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"6d95c7ed-e3ea-4383-b083-1df5df078f1c","Type":"ContainerStarted","Data":"a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39"} Mar 09 16:26:40.702931 master-0 kubenswrapper[7604]: I0309 16:26:40.702866 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" event={"ID":"067290d0-06ec-4bb5-8618-b7b52a8b6bb1","Type":"ContainerStarted","Data":"5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939"} Mar 09 16:26:40.703000 master-0 kubenswrapper[7604]: I0309 16:26:40.702938 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" event={"ID":"067290d0-06ec-4bb5-8618-b7b52a8b6bb1","Type":"ContainerStarted","Data":"8c126274065003e766bcdae94018421423730b127196a7f83e555d62d1340c2b"} Mar 09 16:26:40.703115 master-0 kubenswrapper[7604]: I0309 16:26:40.703085 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:40.717621 master-0 kubenswrapper[7604]: I0309 16:26:40.717538 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.7175127730000002 podStartE2EDuration="1.717512773s" podCreationTimestamp="2026-03-09 16:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:40.715545708 +0000 UTC m=+57.769515141" watchObservedRunningTime="2026-03-09 16:26:40.717512773 +0000 UTC m=+57.771482196" Mar 09 16:26:41.289080 master-0 kubenswrapper[7604]: I0309 16:26:41.289011 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:26:41.439326 master-0 kubenswrapper[7604]: I0309 16:26:41.439254 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" podStartSLOduration=10.439229969 podStartE2EDuration="10.439229969s" podCreationTimestamp="2026-03-09 16:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:26:40.739043006 +0000 UTC m=+57.793012439" watchObservedRunningTime="2026-03-09 16:26:41.439229969 +0000 UTC m=+58.493199392" Mar 09 16:26:41.758278 master-0 kubenswrapper[7604]: I0309 16:26:41.758222 7604 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 09 16:26:41.759202 master-0 kubenswrapper[7604]: I0309 16:26:41.759166 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e" gracePeriod=30 Mar 09 16:26:41.759479 master-0 kubenswrapper[7604]: I0309 16:26:41.759222 7604 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093" gracePeriod=30 Mar 09 16:26:41.760554 master-0 kubenswrapper[7604]: I0309 16:26:41.760486 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 09 16:26:41.760873 master-0 kubenswrapper[7604]: E0309 16:26:41.760826 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 09 16:26:41.760873 master-0 kubenswrapper[7604]: I0309 16:26:41.760858 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 09 16:26:41.760873 master-0 kubenswrapper[7604]: E0309 16:26:41.760877 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 09 16:26:41.761011 master-0 kubenswrapper[7604]: I0309 16:26:41.760885 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 09 16:26:41.761055 master-0 kubenswrapper[7604]: I0309 16:26:41.761014 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" Mar 09 16:26:41.761055 master-0 kubenswrapper[7604]: I0309 16:26:41.761043 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" Mar 09 16:26:41.763014 master-0 kubenswrapper[7604]: I0309 16:26:41.762873 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 09 16:26:41.904068 master-0 kubenswrapper[7604]: I0309 16:26:41.903849 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:41.904438 master-0 kubenswrapper[7604]: I0309 16:26:41.904151 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:41.904438 master-0 kubenswrapper[7604]: I0309 16:26:41.904233 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:41.904438 master-0 kubenswrapper[7604]: I0309 16:26:41.904408 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:41.904438 master-0 kubenswrapper[7604]: I0309 16:26:41.904443 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:41.904623 master-0 
kubenswrapper[7604]: I0309 16:26:41.904469 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.005569 master-0 kubenswrapper[7604]: I0309 16:26:42.005502 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006023 master-0 kubenswrapper[7604]: I0309 16:26:42.005701 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006092 master-0 kubenswrapper[7604]: I0309 16:26:42.005971 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006192 master-0 kubenswrapper[7604]: I0309 16:26:42.006176 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006251 master-0 kubenswrapper[7604]: I0309 16:26:42.006229 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006338 master-0 kubenswrapper[7604]: I0309 16:26:42.006325 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006416 master-0 kubenswrapper[7604]: I0309 16:26:42.006262 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006504 master-0 kubenswrapper[7604]: I0309 16:26:42.006467 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006556 master-0 kubenswrapper[7604]: I0309 16:26:42.006532 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006623 master-0 kubenswrapper[7604]: I0309 16:26:42.006401 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.006828 master-0 kubenswrapper[7604]: 
I0309 16:26:42.006812 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:42.007023 master-0 kubenswrapper[7604]: I0309 16:26:42.006848 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:26:43.486440 master-0 kubenswrapper[7604]: I0309 16:26:43.486084 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_103a81df-6dfb-42d3-bc03-4391681c3e35/installer/0.log" Mar 09 16:26:43.486440 master-0 kubenswrapper[7604]: I0309 16:26:43.486163 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:43.628270 master-0 kubenswrapper[7604]: I0309 16:26:43.628125 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-kubelet-dir\") pod \"103a81df-6dfb-42d3-bc03-4391681c3e35\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " Mar 09 16:26:43.628537 master-0 kubenswrapper[7604]: I0309 16:26:43.628288 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "103a81df-6dfb-42d3-bc03-4391681c3e35" (UID: "103a81df-6dfb-42d3-bc03-4391681c3e35"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:43.628740 master-0 kubenswrapper[7604]: I0309 16:26:43.628674 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103a81df-6dfb-42d3-bc03-4391681c3e35-kube-api-access\") pod \"103a81df-6dfb-42d3-bc03-4391681c3e35\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " Mar 09 16:26:43.629032 master-0 kubenswrapper[7604]: I0309 16:26:43.628796 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-var-lock\") pod \"103a81df-6dfb-42d3-bc03-4391681c3e35\" (UID: \"103a81df-6dfb-42d3-bc03-4391681c3e35\") " Mar 09 16:26:43.629350 master-0 kubenswrapper[7604]: I0309 16:26:43.629217 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-var-lock" (OuterVolumeSpecName: "var-lock") pod "103a81df-6dfb-42d3-bc03-4391681c3e35" (UID: "103a81df-6dfb-42d3-bc03-4391681c3e35"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:43.629350 master-0 kubenswrapper[7604]: I0309 16:26:43.629344 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:43.629487 master-0 kubenswrapper[7604]: I0309 16:26:43.629357 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/103a81df-6dfb-42d3-bc03-4391681c3e35-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:43.631597 master-0 kubenswrapper[7604]: I0309 16:26:43.631553 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103a81df-6dfb-42d3-bc03-4391681c3e35-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "103a81df-6dfb-42d3-bc03-4391681c3e35" (UID: "103a81df-6dfb-42d3-bc03-4391681c3e35"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:43.715578 master-0 kubenswrapper[7604]: I0309 16:26:43.715123 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_103a81df-6dfb-42d3-bc03-4391681c3e35/installer/0.log" Mar 09 16:26:43.715578 master-0 kubenswrapper[7604]: I0309 16:26:43.715176 7604 generic.go:334] "Generic (PLEG): container finished" podID="103a81df-6dfb-42d3-bc03-4391681c3e35" containerID="a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6" exitCode=1 Mar 09 16:26:43.715578 master-0 kubenswrapper[7604]: I0309 16:26:43.715211 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"103a81df-6dfb-42d3-bc03-4391681c3e35","Type":"ContainerDied","Data":"a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6"} Mar 09 16:26:43.715578 master-0 kubenswrapper[7604]: I0309 16:26:43.715238 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"103a81df-6dfb-42d3-bc03-4391681c3e35","Type":"ContainerDied","Data":"0e8cad4e52fb5c35bce0a53f2e1987cc8c806e677f3567f9d359feddc29333f6"} Mar 09 16:26:43.715578 master-0 kubenswrapper[7604]: I0309 16:26:43.715259 7604 scope.go:117] "RemoveContainer" containerID="a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6" Mar 09 16:26:43.715578 master-0 kubenswrapper[7604]: I0309 16:26:43.715269 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 09 16:26:43.726611 master-0 kubenswrapper[7604]: I0309 16:26:43.726567 7604 scope.go:117] "RemoveContainer" containerID="a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6" Mar 09 16:26:43.727161 master-0 kubenswrapper[7604]: E0309 16:26:43.727092 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6\": container with ID starting with a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6 not found: ID does not exist" containerID="a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6" Mar 09 16:26:43.727161 master-0 kubenswrapper[7604]: I0309 16:26:43.727129 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6"} err="failed to get container status \"a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6\": rpc error: code = NotFound desc = could not find container \"a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6\": container with ID starting with a3f2ab32daec9deab63a1933d3de075885b3a06e2eca00e8f709ee68ab727db6 not found: ID does not exist" Mar 09 16:26:43.730373 master-0 kubenswrapper[7604]: I0309 16:26:43.730352 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/103a81df-6dfb-42d3-bc03-4391681c3e35-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:47.888631 master-0 kubenswrapper[7604]: I0309 16:26:47.888545 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: 
\"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:26:47.889125 master-0 kubenswrapper[7604]: I0309 16:26:47.888694 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:26:47.889125 master-0 kubenswrapper[7604]: I0309 16:26:47.888743 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:26:47.891815 master-0 kubenswrapper[7604]: I0309 16:26:47.891768 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:26:47.891985 master-0 kubenswrapper[7604]: I0309 16:26:47.891931 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:26:47.892584 master-0 kubenswrapper[7604]: I0309 16:26:47.892548 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:26:47.983891 master-0 kubenswrapper[7604]: I0309 16:26:47.983808 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:26:47.984629 master-0 kubenswrapper[7604]: I0309 16:26:47.984567 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:26:47.985636 master-0 kubenswrapper[7604]: I0309 16:26:47.985273 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:26:47.989797 master-0 kubenswrapper[7604]: I0309 16:26:47.989753 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:26:47.989991 master-0 kubenswrapper[7604]: I0309 16:26:47.989802 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:26:47.989991 master-0 kubenswrapper[7604]: I0309 16:26:47.989838 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:26:47.989991 master-0 kubenswrapper[7604]: I0309 16:26:47.989868 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:26:47.989991 master-0 kubenswrapper[7604]: I0309 16:26:47.989899 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:26:47.990180 master-0 kubenswrapper[7604]: I0309 16:26:47.990043 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:26:47.990180 master-0 kubenswrapper[7604]: I0309 16:26:47.990100 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " 
pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:26:47.990845 master-0 kubenswrapper[7604]: I0309 16:26:47.990320 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:26:47.990845 master-0 kubenswrapper[7604]: I0309 16:26:47.990359 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:26:47.993801 master-0 kubenswrapper[7604]: I0309 16:26:47.993749 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:26:47.993801 master-0 kubenswrapper[7604]: I0309 16:26:47.993757 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:26:47.994031 master-0 kubenswrapper[7604]: I0309 16:26:47.993978 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:26:47.994345 master-0 kubenswrapper[7604]: I0309 16:26:47.994292 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"multus-admission-controller-8d675b596-g8n5t\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:26:47.994345 master-0 kubenswrapper[7604]: I0309 16:26:47.994324 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:26:47.994345 master-0 kubenswrapper[7604]: I0309 16:26:47.994312 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:26:47.994516 master-0 kubenswrapper[7604]: I0309 16:26:47.994476 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:26:47.995833 master-0 kubenswrapper[7604]: 
I0309 16:26:47.995774 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:26:47.996565 master-0 kubenswrapper[7604]: I0309 16:26:47.996512 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:26:48.286191 master-0 kubenswrapper[7604]: I0309 16:26:48.286104 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:26:48.286457 master-0 kubenswrapper[7604]: I0309 16:26:48.286133 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:26:48.286961 master-0 kubenswrapper[7604]: I0309 16:26:48.286890 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:26:48.288016 master-0 kubenswrapper[7604]: I0309 16:26:48.287252 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:26:48.288016 master-0 kubenswrapper[7604]: I0309 16:26:48.287637 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:26:48.288016 master-0 kubenswrapper[7604]: I0309 16:26:48.288001 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:26:48.288884 master-0 kubenswrapper[7604]: I0309 16:26:48.288811 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:26:48.289393 master-0 kubenswrapper[7604]: I0309 16:26:48.289362 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:26:52.754329 master-0 kubenswrapper[7604]: I0309 16:26:52.754215 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nmvdk_d2d3c20a-f92e-433b-9fbc-b667b7bcf175/openshift-controller-manager-operator/0.log" Mar 09 16:26:52.754329 master-0 kubenswrapper[7604]: I0309 16:26:52.754284 7604 generic.go:334] "Generic (PLEG): container finished" podID="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" containerID="cc3b26ecc6db80d8920394a2785316da766a94e7ed17c29a0dba7776c2765c20" exitCode=1 Mar 09 16:26:52.754329 master-0 kubenswrapper[7604]: I0309 16:26:52.754323 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" event={"ID":"d2d3c20a-f92e-433b-9fbc-b667b7bcf175","Type":"ContainerDied","Data":"cc3b26ecc6db80d8920394a2785316da766a94e7ed17c29a0dba7776c2765c20"} Mar 09 16:26:52.755241 master-0 kubenswrapper[7604]: I0309 16:26:52.754769 7604 scope.go:117] "RemoveContainer" containerID="cc3b26ecc6db80d8920394a2785316da766a94e7ed17c29a0dba7776c2765c20" Mar 09 16:26:53.760785 master-0 kubenswrapper[7604]: I0309 16:26:53.760702 7604 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nmvdk_d2d3c20a-f92e-433b-9fbc-b667b7bcf175/openshift-controller-manager-operator/0.log" Mar 09 16:26:53.760785 master-0 kubenswrapper[7604]: I0309 16:26:53.760769 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" event={"ID":"d2d3c20a-f92e-433b-9fbc-b667b7bcf175","Type":"ContainerStarted","Data":"b1c16a3899be6493dfcbe845944c02e0cb586d0232ff82e821db925b84a7b8fd"} Mar 09 16:26:54.766100 master-0 kubenswrapper[7604]: I0309 16:26:54.766055 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="d44b4281b666f32d8647c6a143f074eebe2a44e65e8dee2574808efbf233ffa9" exitCode=1 Mar 09 16:26:54.766534 master-0 kubenswrapper[7604]: I0309 16:26:54.766143 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"d44b4281b666f32d8647c6a143f074eebe2a44e65e8dee2574808efbf233ffa9"} Mar 09 16:26:54.766741 master-0 kubenswrapper[7604]: I0309 16:26:54.766716 7604 scope.go:117] "RemoveContainer" containerID="d44b4281b666f32d8647c6a143f074eebe2a44e65e8dee2574808efbf233ffa9" Mar 09 16:26:54.789675 master-0 kubenswrapper[7604]: E0309 16:26:54.789627 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 09 16:26:54.790370 master-0 kubenswrapper[7604]: I0309 16:26:54.790352 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 09 16:26:55.024956 master-0 kubenswrapper[7604]: E0309 16:26:55.023312 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:26:55.056653 master-0 kubenswrapper[7604]: I0309 16:26:55.056607 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:26:55.772640 master-0 kubenswrapper[7604]: I0309 16:26:55.772578 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"53d8447feebe6efb47509a21566ddef5d07e95379bceb527f12338aebfbdcef8"} Mar 09 16:26:56.780794 master-0 kubenswrapper[7604]: I0309 16:26:56.780752 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"8ec03662bfb689a4764f7edbb538732c79e6e42855becb27b7223236cfbfeaa7"} Mar 09 16:26:56.783187 master-0 kubenswrapper[7604]: I0309 16:26:56.783115 7604 generic.go:334] "Generic (PLEG): container finished" podID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerID="416bfbec5030b68d4b4837b781967c573c06ae0b5142f97eb8ad1a431a641798" exitCode=0 Mar 09 16:26:56.783672 master-0 kubenswrapper[7604]: I0309 16:26:56.783203 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"07aaf428-5040-4e75-9c0d-e092d0b2c2f3","Type":"ContainerDied","Data":"416bfbec5030b68d4b4837b781967c573c06ae0b5142f97eb8ad1a431a641798"} Mar 09 16:26:56.785130 master-0 kubenswrapper[7604]: I0309 16:26:56.785088 7604 generic.go:334] "Generic (PLEG): container finished" 
podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="ac7dbd1722f48f03cc15a7ad9f7c4d79c749293927a88ba8bf73c146e69f9d3b" exitCode=0 Mar 09 16:26:56.785130 master-0 kubenswrapper[7604]: I0309 16:26:56.785127 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"ac7dbd1722f48f03cc15a7ad9f7c4d79c749293927a88ba8bf73c146e69f9d3b"} Mar 09 16:26:57.181551 master-0 kubenswrapper[7604]: I0309 16:26:57.181487 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:26:58.044557 master-0 kubenswrapper[7604]: I0309 16:26:58.044350 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:58.152581 master-0 kubenswrapper[7604]: I0309 16:26:58.152480 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-var-lock\") pod \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " Mar 09 16:26:58.152581 master-0 kubenswrapper[7604]: I0309 16:26:58.152592 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kubelet-dir\") pod \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " Mar 09 16:26:58.152952 master-0 kubenswrapper[7604]: I0309 16:26:58.152598 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-var-lock" (OuterVolumeSpecName: "var-lock") pod "07aaf428-5040-4e75-9c0d-e092d0b2c2f3" (UID: "07aaf428-5040-4e75-9c0d-e092d0b2c2f3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:58.152952 master-0 kubenswrapper[7604]: I0309 16:26:58.152628 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kube-api-access\") pod \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\" (UID: \"07aaf428-5040-4e75-9c0d-e092d0b2c2f3\") " Mar 09 16:26:58.152952 master-0 kubenswrapper[7604]: I0309 16:26:58.152659 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07aaf428-5040-4e75-9c0d-e092d0b2c2f3" (UID: "07aaf428-5040-4e75-9c0d-e092d0b2c2f3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:26:58.152952 master-0 kubenswrapper[7604]: I0309 16:26:58.152890 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:58.152952 master-0 kubenswrapper[7604]: I0309 16:26:58.152905 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:58.156229 master-0 kubenswrapper[7604]: I0309 16:26:58.156135 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07aaf428-5040-4e75-9c0d-e092d0b2c2f3" (UID: "07aaf428-5040-4e75-9c0d-e092d0b2c2f3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:26:58.254335 master-0 kubenswrapper[7604]: I0309 16:26:58.254218 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07aaf428-5040-4e75-9c0d-e092d0b2c2f3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:26:58.795204 master-0 kubenswrapper[7604]: I0309 16:26:58.795118 7604 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="e8fcbf086ed08a14966a423a93930e67c1cbd9793017fcea8581f23478898eea" exitCode=1 Mar 09 16:26:58.795612 master-0 kubenswrapper[7604]: I0309 16:26:58.795227 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"e8fcbf086ed08a14966a423a93930e67c1cbd9793017fcea8581f23478898eea"} Mar 09 16:26:58.797112 master-0 kubenswrapper[7604]: I0309 16:26:58.797086 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 09 16:26:58.797195 master-0 kubenswrapper[7604]: I0309 16:26:58.797090 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"07aaf428-5040-4e75-9c0d-e092d0b2c2f3","Type":"ContainerDied","Data":"9901d0aaf4b1546909e7fc4c6fcee79bdbe51cd6dd0be1d8dfa8048b9232cb38"} Mar 09 16:26:58.797195 master-0 kubenswrapper[7604]: I0309 16:26:58.797160 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9901d0aaf4b1546909e7fc4c6fcee79bdbe51cd6dd0be1d8dfa8048b9232cb38" Mar 09 16:26:58.797280 master-0 kubenswrapper[7604]: I0309 16:26:58.797215 7604 scope.go:117] "RemoveContainer" containerID="e8fcbf086ed08a14966a423a93930e67c1cbd9793017fcea8581f23478898eea" Mar 09 16:26:59.804831 master-0 kubenswrapper[7604]: I0309 16:26:59.804776 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239"} Mar 09 16:27:04.834524 master-0 kubenswrapper[7604]: E0309 16:27:04.834385 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:26:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:26:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:26:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:26:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029
c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:05.023653 master-0 kubenswrapper[7604]: E0309 16:27:05.023583 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:05.057758 master-0 kubenswrapper[7604]: I0309 16:27:05.057673 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:27:08.058183 master-0 kubenswrapper[7604]: I0309 16:27:08.058069 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:09.792482 master-0 kubenswrapper[7604]: E0309 16:27:09.792440 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 09 16:27:09.844996 master-0 kubenswrapper[7604]: I0309 16:27:09.844906 7604 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093" exitCode=0 Mar 09 16:27:10.850457 master-0 kubenswrapper[7604]: I0309 16:27:10.850389 7604 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="77a3a31971fc786009b6ca6331ba76028043254e7a94076dc174933975c99fea" exitCode=0 Mar 09 16:27:10.851012 master-0 kubenswrapper[7604]: I0309 16:27:10.850455 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"77a3a31971fc786009b6ca6331ba76028043254e7a94076dc174933975c99fea"} Mar 09 16:27:11.854045 master-0 kubenswrapper[7604]: I0309 16:27:11.853989 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 09 16:27:11.854769 master-0 kubenswrapper[7604]: I0309 16:27:11.854081 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:27:11.856830 master-0 kubenswrapper[7604]: I0309 16:27:11.856654 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 09 16:27:11.856830 master-0 kubenswrapper[7604]: I0309 16:27:11.856701 7604 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e" exitCode=137 Mar 09 16:27:11.856830 master-0 kubenswrapper[7604]: I0309 16:27:11.856746 7604 scope.go:117] "RemoveContainer" containerID="1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093" Mar 09 16:27:11.868581 master-0 kubenswrapper[7604]: I0309 16:27:11.868277 7604 scope.go:117] "RemoveContainer" containerID="a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e" Mar 09 16:27:11.880755 master-0 kubenswrapper[7604]: I0309 16:27:11.880557 7604 scope.go:117] "RemoveContainer" containerID="1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093" Mar 09 16:27:11.880955 master-0 kubenswrapper[7604]: E0309 16:27:11.880890 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093\": container with ID starting with 1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093 not found: ID does not exist" containerID="1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093" Mar 09 16:27:11.880955 master-0 kubenswrapper[7604]: I0309 16:27:11.880922 7604 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093"} err="failed to get container status \"1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093\": rpc error: code = NotFound desc = could not find container \"1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093\": container with ID starting with 1a9e98c09d1edb5cf4cc6a9d34e3506ae790dba5d4a84269d20fd8aeaa2c6093 not found: ID does not exist" Mar 09 16:27:11.880955 master-0 kubenswrapper[7604]: I0309 16:27:11.880943 7604 scope.go:117] "RemoveContainer" containerID="a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e" Mar 09 16:27:11.881223 master-0 kubenswrapper[7604]: E0309 16:27:11.881165 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e\": container with ID starting with a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e not found: ID does not exist" containerID="a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e" Mar 09 16:27:11.881223 master-0 kubenswrapper[7604]: I0309 16:27:11.881195 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e"} err="failed to get container status \"a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e\": rpc error: code = NotFound desc = could not find container \"a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e\": container with ID starting with a402b2ce345ae582ee278c5325eed3e19a15b5c3c70916dd9a08a2c884bc982e not found: ID does not exist" Mar 09 16:27:12.008897 master-0 kubenswrapper[7604]: I0309 16:27:12.008791 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod 
\"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 09 16:27:12.008897 master-0 kubenswrapper[7604]: I0309 16:27:12.008847 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 09 16:27:12.009220 master-0 kubenswrapper[7604]: I0309 16:27:12.008897 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:12.009220 master-0 kubenswrapper[7604]: I0309 16:27:12.009090 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:12.009220 master-0 kubenswrapper[7604]: I0309 16:27:12.009107 7604 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:12.110955 master-0 kubenswrapper[7604]: I0309 16:27:12.110841 7604 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:12.861642 master-0 kubenswrapper[7604]: I0309 16:27:12.861611 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:27:12.992084 master-0 kubenswrapper[7604]: I0309 16:27:12.992010 7604 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-6wlgj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 09 16:27:12.992292 master-0 kubenswrapper[7604]: I0309 16:27:12.992085 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" podUID="e2e38be5-1d33-4171-b27f-78a335f1590b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 09 16:27:13.118242 master-0 kubenswrapper[7604]: I0309 16:27:13.118120 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 09 16:27:13.118513 master-0 kubenswrapper[7604]: I0309 16:27:13.118492 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 09 16:27:13.867037 master-0 kubenswrapper[7604]: I0309 16:27:13.866965 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_963633a2-3f9d-4b82-9e53-d749fa52bf8e/installer/0.log" Mar 09 16:27:13.867690 master-0 kubenswrapper[7604]: I0309 16:27:13.867037 7604 generic.go:334] "Generic (PLEG): container finished" podID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerID="d41d86bd25e4bbee52e08006f2bc72adad98a14d24d258528deb873f333249a6" exitCode=1 Mar 09 16:27:14.836450 master-0 kubenswrapper[7604]: E0309 16:27:14.835800 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:15.025110 master-0 kubenswrapper[7604]: E0309 16:27:15.024748 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:15.800186 master-0 kubenswrapper[7604]: E0309 16:27:15.799962 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189b390bf17a94a7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:26:41.759188135 +0000 UTC m=+58.813157558,LastTimestamp:2026-03-09 16:26:41.759188135 +0000 UTC m=+58.813157558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:27:16.882776 master-0 kubenswrapper[7604]: I0309 16:27:16.882702 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1e5298b1-ccde-4c18-8cdb-f415a4842f75/installer/0.log" Mar 09 16:27:16.882776 master-0 kubenswrapper[7604]: I0309 16:27:16.882769 7604 generic.go:334] "Generic (PLEG): container finished" podID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerID="99a339c5f3968e16e82464c06f5f8bce759eee7e72f76870e9bcaf5b40dfae4f" exitCode=1 Mar 09 16:27:18.057080 
master-0 kubenswrapper[7604]: I0309 16:27:18.056939 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:22.991677 master-0 kubenswrapper[7604]: I0309 16:27:22.991606 7604 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-6wlgj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 09 16:27:22.992258 master-0 kubenswrapper[7604]: I0309 16:27:22.991674 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" podUID="e2e38be5-1d33-4171-b27f-78a335f1590b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 09 16:27:23.857094 master-0 kubenswrapper[7604]: E0309 16:27:23.856974 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 09 16:27:24.837310 master-0 kubenswrapper[7604]: E0309 16:27:24.837061 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:24.916060 master-0 kubenswrapper[7604]: I0309 16:27:24.915993 7604 generic.go:334] "Generic (PLEG): container finished" 
podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="35ea1971363594acb6e2af9ffc0246bb0a5c5f470f8d574da32d0f7bbc775968" exitCode=0 Mar 09 16:27:24.918073 master-0 kubenswrapper[7604]: I0309 16:27:24.918051 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-r82z7_5565c060-5952-4e85-8873-18bb80663924/network-operator/0.log" Mar 09 16:27:24.918227 master-0 kubenswrapper[7604]: I0309 16:27:24.918083 7604 generic.go:334] "Generic (PLEG): container finished" podID="5565c060-5952-4e85-8873-18bb80663924" containerID="a8d177dbb3aa3504d7da8194a33995b9c5590e73006f731e32a19254943a15e2" exitCode=255 Mar 09 16:27:25.025877 master-0 kubenswrapper[7604]: E0309 16:27:25.025821 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:28.058699 master-0 kubenswrapper[7604]: I0309 16:27:28.058472 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:30.942502 master-0 kubenswrapper[7604]: I0309 16:27:30.942407 7604 generic.go:334] "Generic (PLEG): container finished" podID="d6912539-9b06-4e2c-b6a8-155df31147f2" containerID="a517766120d5207dbc0746849224568d7e6239234bc628933b81ef9e4c5bff53" exitCode=0 Mar 09 16:27:30.944187 master-0 kubenswrapper[7604]: I0309 16:27:30.944152 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/0.log" Mar 09 16:27:30.944581 
master-0 kubenswrapper[7604]: I0309 16:27:30.944553 7604 generic.go:334] "Generic (PLEG): container finished" podID="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" containerID="c33568491251a6cc29f433d394d9f99ae4624c6f4d925ee43ed4349c74f3003e" exitCode=1 Mar 09 16:27:32.992001 master-0 kubenswrapper[7604]: I0309 16:27:32.991933 7604 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-6wlgj container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 09 16:27:32.992001 master-0 kubenswrapper[7604]: I0309 16:27:32.991994 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" podUID="e2e38be5-1d33-4171-b27f-78a335f1590b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 09 16:27:34.837603 master-0 kubenswrapper[7604]: E0309 16:27:34.837526 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:35.026396 master-0 kubenswrapper[7604]: E0309 16:27:35.026321 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:35.026396 master-0 kubenswrapper[7604]: I0309 16:27:35.026382 7604 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 09 16:27:37.923122 master-0 
kubenswrapper[7604]: E0309 16:27:37.923054 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 09 16:27:40.996573 master-0 kubenswrapper[7604]: I0309 16:27:40.996515 7604 generic.go:334] "Generic (PLEG): container finished" podID="34a4491c-12cc-4531-ad3e-246e93ed7842" containerID="fa5ddd5802e33c8a6619b86d4545b8a3364c98e851507c10917062099a64157c" exitCode=0 Mar 09 16:27:43.488195 master-0 kubenswrapper[7604]: I0309 16:27:43.488076 7604 status_manager.go:851] "Failed to get status for pod" podUID="103a81df-6dfb-42d3-bc03-4391681c3e35" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 09 16:27:44.839137 master-0 kubenswrapper[7604]: E0309 16:27:44.838538 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:27:44.839771 master-0 kubenswrapper[7604]: E0309 16:27:44.839312 7604 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 09 16:27:45.028108 master-0 kubenswrapper[7604]: E0309 16:27:45.027575 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 09 16:27:46.021322 master-0 kubenswrapper[7604]: I0309 16:27:46.021257 7604 generic.go:334] "Generic (PLEG): container finished" podID="e2e38be5-1d33-4171-b27f-78a335f1590b" 
containerID="aae9b4fa27818489ab82742a1d088f45fbd99626e96c87f0d251b8c8d0c8bce4" exitCode=0 Mar 09 16:27:47.120531 master-0 kubenswrapper[7604]: E0309 16:27:47.120417 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 09 16:27:47.121079 master-0 kubenswrapper[7604]: E0309 16:27:47.120687 7604 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s" Mar 09 16:27:47.121079 master-0 kubenswrapper[7604]: I0309 16:27:47.120726 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:27:47.122174 master-0 kubenswrapper[7604]: I0309 16:27:47.122119 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"8ec03662bfb689a4764f7edbb538732c79e6e42855becb27b7223236cfbfeaa7"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 09 16:27:47.122383 master-0 kubenswrapper[7604]: I0309 16:27:47.122341 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://8ec03662bfb689a4764f7edbb538732c79e6e42855becb27b7223236cfbfeaa7" gracePeriod=30 Mar 09 16:27:47.129806 master-0 kubenswrapper[7604]: I0309 16:27:47.129722 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 09 16:27:48.033135 master-0 kubenswrapper[7604]: I0309 16:27:48.033090 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" 
containerID="8ec03662bfb689a4764f7edbb538732c79e6e42855becb27b7223236cfbfeaa7" exitCode=2 Mar 09 16:27:48.581764 master-0 kubenswrapper[7604]: E0309 16:27:48.581720 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:48.581764 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-xtmhw_openshift-ingress-operator_f606b775-bf22-4d64-abb4-8e0e24ddb5cd_0(0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-xtmhw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4" Netns:"/var/run/netns/b7459feb-5349-44ba-b80a-3e5caad0143e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-xtmhw;K8S_POD_INFRA_CONTAINER_ID=0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4;K8S_POD_UID=f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw/f606b775-bf22-4d64-abb4-8e0e24ddb5cd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-xtmhw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:48.581764 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.581764 master-0 kubenswrapper[7604]: > Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: E0309 16:27:48.581802 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-xtmhw_openshift-ingress-operator_f606b775-bf22-4d64-abb4-8e0e24ddb5cd_0(0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-xtmhw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4" Netns:"/var/run/netns/b7459feb-5349-44ba-b80a-3e5caad0143e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-xtmhw;K8S_POD_INFRA_CONTAINER_ID=0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4;K8S_POD_UID=f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw/f606b775-bf22-4d64-abb4-8e0e24ddb5cd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: status update failed 
for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-xtmhw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: > pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: E0309 16:27:48.581819 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-xtmhw_openshift-ingress-operator_f606b775-bf22-4d64-abb4-8e0e24ddb5cd_0(0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-xtmhw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4" Netns:"/var/run/netns/b7459feb-5349-44ba-b80a-3e5caad0143e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-xtmhw;K8S_POD_INFRA_CONTAINER_ID=0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4;K8S_POD_UID=f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw] networking: Multus: 
[openshift-ingress-operator/ingress-operator-677db989d6-xtmhw/f606b775-bf22-4d64-abb4-8e0e24ddb5cd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-xtmhw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: > pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:27:48.582253 master-0 kubenswrapper[7604]: E0309 16:27:48.581888 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-677db989d6-xtmhw_openshift-ingress-operator_f606b775-bf22-4d64-abb4-8e0e24ddb5cd_0(0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4): error adding pod openshift-ingress-operator_ingress-operator-677db989d6-xtmhw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4\\\" Netns:\\\"/var/run/netns/b7459feb-5349-44ba-b80a-3e5caad0143e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-677db989d6-xtmhw;K8S_POD_INFRA_CONTAINER_ID=0ec7c4f91a2ec1ab26e3ccdf4a3a03bbc0888659f6db24b632d4e6668b4537b4;K8S_POD_UID=f606b775-bf22-4d64-abb4-8e0e24ddb5cd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw] networking: Multus: [openshift-ingress-operator/ingress-operator-677db989d6-xtmhw/f606b775-bf22-4d64-abb4-8e0e24ddb5cd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-677db989d6-xtmhw in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-677db989d6-xtmhw?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:27:48.655882 master-0 kubenswrapper[7604]: E0309 16:27:48.655737 7604 log.go:32] "RunPodSandbox from runtime 
service failed" err=< Mar 09 16:27:48.655882 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-hv8xl_openshift-operator-lifecycle-manager_d15da434-241d-4a93-9ce3-f943d43bf2ce_0(deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e): error adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-hv8xl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e" Netns:"/var/run/netns/91aa55b8-dd8a-4ee9-8474-dd85d6654e43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-hv8xl;K8S_POD_INFRA_CONTAINER_ID=deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e;K8S_POD_UID=d15da434-241d-4a93-9ce3-f943d43bf2ce" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl/d15da434-241d-4a93-9ce3-f943d43bf2ce]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods catalog-operator-7d9c49f57b-hv8xl) Mar 09 16:27:48.655882 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.655882 master-0 kubenswrapper[7604]: > Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: E0309 16:27:48.655877 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-hv8xl_openshift-operator-lifecycle-manager_d15da434-241d-4a93-9ce3-f943d43bf2ce_0(deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e): error adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-hv8xl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e" Netns:"/var/run/netns/91aa55b8-dd8a-4ee9-8474-dd85d6654e43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-hv8xl;K8S_POD_INFRA_CONTAINER_ID=deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e;K8S_POD_UID=d15da434-241d-4a93-9ce3-f943d43bf2ce" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl/d15da434-241d-4a93-9ce3-f943d43bf2ce]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: SetNetworkStatus: failed to update the pod 
catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods catalog-operator-7d9c49f57b-hv8xl) Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: > pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: E0309 16:27:48.655921 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-hv8xl_openshift-operator-lifecycle-manager_d15da434-241d-4a93-9ce3-f943d43bf2ce_0(deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e): error adding pod openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-hv8xl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e" Netns:"/var/run/netns/91aa55b8-dd8a-4ee9-8474-dd85d6654e43" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-hv8xl;K8S_POD_INFRA_CONTAINER_ID=deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e;K8S_POD_UID=d15da434-241d-4a93-9ce3-f943d43bf2ce" Path:"" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl/d15da434-241d-4a93-9ce3-f943d43bf2ce]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods catalog-operator-7d9c49f57b-hv8xl) Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: > pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:27:48.656087 master-0 kubenswrapper[7604]: E0309 16:27:48.656019 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-7d9c49f57b-hv8xl_openshift-operator-lifecycle-manager(d15da434-241d-4a93-9ce3-f943d43bf2ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-7d9c49f57b-hv8xl_openshift-operator-lifecycle-manager(d15da434-241d-4a93-9ce3-f943d43bf2ce)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-7d9c49f57b-hv8xl_openshift-operator-lifecycle-manager_d15da434-241d-4a93-9ce3-f943d43bf2ce_0(deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e): error adding pod 
openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-hv8xl to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e\\\" Netns:\\\"/var/run/netns/91aa55b8-dd8a-4ee9-8474-dd85d6654e43\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-7d9c49f57b-hv8xl;K8S_POD_INFRA_CONTAINER_ID=deb5fa98087766225539be9d723cffc65ab1bf41d477aa1a6b6e051442bf6f1e;K8S_POD_UID=d15da434-241d-4a93-9ce3-f943d43bf2ce\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl/d15da434-241d-4a93-9ce3-f943d43bf2ce]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-7d9c49f57b-hv8xl in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods catalog-operator-7d9c49f57b-hv8xl)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" podUID="d15da434-241d-4a93-9ce3-f943d43bf2ce" Mar 09 
16:27:48.742785 master-0 kubenswrapper[7604]: E0309 16:27:48.742695 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:48.742785 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-vh6m4_openshift-marketplace_5b9030c9-7f5f-4e54-ae93-140469e3558b_0(806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-vh6m4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95" Netns:"/var/run/netns/00cc22db-d1e1-433f-b8b4-e9824975851e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-vh6m4;K8S_POD_INFRA_CONTAINER_ID=806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95;K8S_POD_UID=5b9030c9-7f5f-4e54-ae93-140469e3558b" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4/5b9030c9-7f5f-4e54-ae93-140469e3558b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-vh6m4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:48.742785 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.742785 master-0 kubenswrapper[7604]: > Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: E0309 16:27:48.742833 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-vh6m4_openshift-marketplace_5b9030c9-7f5f-4e54-ae93-140469e3558b_0(806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-vh6m4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95" Netns:"/var/run/netns/00cc22db-d1e1-433f-b8b4-e9824975851e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-vh6m4;K8S_POD_INFRA_CONTAINER_ID=806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95;K8S_POD_UID=5b9030c9-7f5f-4e54-ae93-140469e3558b" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4/5b9030c9-7f5f-4e54-ae93-140469e3558b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: status update 
failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-vh6m4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: > pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: E0309 16:27:48.742879 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-vh6m4_openshift-marketplace_5b9030c9-7f5f-4e54-ae93-140469e3558b_0(806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-vh6m4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95" Netns:"/var/run/netns/00cc22db-d1e1-433f-b8b4-e9824975851e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-vh6m4;K8S_POD_INFRA_CONTAINER_ID=806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95;K8S_POD_UID=5b9030c9-7f5f-4e54-ae93-140469e3558b" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4] networking: Multus: 
[openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4/5b9030c9-7f5f-4e54-ae93-140469e3558b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-vh6m4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: > pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:27:48.743075 master-0 kubenswrapper[7604]: E0309 16:27:48.743002 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-64bf9778cb-vh6m4_openshift-marketplace(5b9030c9-7f5f-4e54-ae93-140469e3558b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-64bf9778cb-vh6m4_openshift-marketplace(5b9030c9-7f5f-4e54-ae93-140469e3558b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-64bf9778cb-vh6m4_openshift-marketplace_5b9030c9-7f5f-4e54-ae93-140469e3558b_0(806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95): error adding pod openshift-marketplace_marketplace-operator-64bf9778cb-vh6m4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95\\\" Netns:\\\"/var/run/netns/00cc22db-d1e1-433f-b8b4-e9824975851e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-64bf9778cb-vh6m4;K8S_POD_INFRA_CONTAINER_ID=806ccc52deae163f6c943b210468dc2b8df17d70b05708e98470925c9e081a95;K8S_POD_UID=5b9030c9-7f5f-4e54-ae93-140469e3558b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4] networking: Multus: [openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4/5b9030c9-7f5f-4e54-ae93-140469e3558b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-64bf9778cb-vh6m4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-64bf9778cb-vh6m4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" podUID="5b9030c9-7f5f-4e54-ae93-140469e3558b" Mar 09 16:27:49.000398 master-0 kubenswrapper[7604]: E0309 16:27:49.000338 7604 log.go:32] "RunPodSandbox from runtime 
service failed" err=< Mar 09 16:27:49.000398 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-86d6d77c7c-dd2j5_openshift-image-registry_2e765395-7c6b-4cba-9a5a-37ba888722bb_0(9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd): error adding pod openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-dd2j5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd" Netns:"/var/run/netns/e58c4236-4957-4e59-ba54-48004e35f129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-86d6d77c7c-dd2j5;K8S_POD_INFRA_CONTAINER_ID=9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd;K8S_POD_UID=2e765395-7c6b-4cba-9a5a-37ba888722bb" Path:"" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5/2e765395-7c6b-4cba-9a5a-37ba888722bb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-86d6d77c7c-dd2j5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.000398 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.000398 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: E0309 16:27:49.000435 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-86d6d77c7c-dd2j5_openshift-image-registry_2e765395-7c6b-4cba-9a5a-37ba888722bb_0(9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd): error adding pod openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-dd2j5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd" Netns:"/var/run/netns/e58c4236-4957-4e59-ba54-48004e35f129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-86d6d77c7c-dd2j5;K8S_POD_INFRA_CONTAINER_ID=9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd;K8S_POD_UID=2e765395-7c6b-4cba-9a5a-37ba888722bb" Path:"" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5/2e765395-7c6b-4cba-9a5a-37ba888722bb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: SetNetworkStatus: failed to update the 
pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-86d6d77c7c-dd2j5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: > pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: E0309 16:27:49.000458 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-86d6d77c7c-dd2j5_openshift-image-registry_2e765395-7c6b-4cba-9a5a-37ba888722bb_0(9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd): error adding pod openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-dd2j5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd" Netns:"/var/run/netns/e58c4236-4957-4e59-ba54-48004e35f129" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-86d6d77c7c-dd2j5;K8S_POD_INFRA_CONTAINER_ID=9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd;K8S_POD_UID=2e765395-7c6b-4cba-9a5a-37ba888722bb" Path:"" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5/2e765395-7c6b-4cba-9a5a-37ba888722bb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-86d6d77c7c-dd2j5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: > pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:27:49.000611 master-0 kubenswrapper[7604]: E0309 16:27:49.000537 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-image-registry-operator-86d6d77c7c-dd2j5_openshift-image-registry(2e765395-7c6b-4cba-9a5a-37ba888722bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cluster-image-registry-operator-86d6d77c7c-dd2j5_openshift-image-registry(2e765395-7c6b-4cba-9a5a-37ba888722bb)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-86d6d77c7c-dd2j5_openshift-image-registry_2e765395-7c6b-4cba-9a5a-37ba888722bb_0(9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd): error adding pod openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-dd2j5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd\\\" Netns:\\\"/var/run/netns/e58c4236-4957-4e59-ba54-48004e35f129\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-86d6d77c7c-dd2j5;K8S_POD_INFRA_CONTAINER_ID=9b5c4ca986d2536ede1e6bd5b38fd206ab1ebd48d02397e580266228d3185dfd;K8S_POD_UID=2e765395-7c6b-4cba-9a5a-37ba888722bb\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5/2e765395-7c6b-4cba-9a5a-37ba888722bb]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-image-registry-operator-86d6d77c7c-dd2j5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-86d6d77c7c-dd2j5?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" podUID="2e765395-7c6b-4cba-9a5a-37ba888722bb" Mar 09 16:27:49.156173 master-0 kubenswrapper[7604]: E0309 16:27:49.156131 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.156173 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-fqwtv_openshift-operator-lifecycle-manager_f965b971-7e9a-4513-8450-b2b527609bd6_0(41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b" Netns:"/var/run/netns/3a0a4ff0-1a49-4f82-ad4d-e9a9c82be224" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-fqwtv;K8S_POD_INFRA_CONTAINER_ID=41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b;K8S_POD_UID=f965b971-7e9a-4513-8450-b2b527609bd6" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv/f965b971-7e9a-4513-8450-b2b527609bd6]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-fqwtv?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.156173 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.156173 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: E0309 16:27:49.156264 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-fqwtv_openshift-operator-lifecycle-manager_f965b971-7e9a-4513-8450-b2b527609bd6_0(41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b" Netns:"/var/run/netns/3a0a4ff0-1a49-4f82-ad4d-e9a9c82be224" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-fqwtv;K8S_POD_INFRA_CONTAINER_ID=41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b;K8S_POD_UID=f965b971-7e9a-4513-8450-b2b527609bd6" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv/f965b971-7e9a-4513-8450-b2b527609bd6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-fqwtv?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: > pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: E0309 16:27:49.156293 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_package-server-manager-854648ff6d-fqwtv_openshift-operator-lifecycle-manager_f965b971-7e9a-4513-8450-b2b527609bd6_0(41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b" Netns:"/var/run/netns/3a0a4ff0-1a49-4f82-ad4d-e9a9c82be224" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-fqwtv;K8S_POD_INFRA_CONTAINER_ID=41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b;K8S_POD_UID=f965b971-7e9a-4513-8450-b2b527609bd6" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv/f965b971-7e9a-4513-8450-b2b527609bd6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-fqwtv?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.156377 master-0 kubenswrapper[7604]: > pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:27:49.157552 master-0 kubenswrapper[7604]: E0309 16:27:49.156402 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-854648ff6d-fqwtv_openshift-operator-lifecycle-manager(f965b971-7e9a-4513-8450-b2b527609bd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-854648ff6d-fqwtv_openshift-operator-lifecycle-manager(f965b971-7e9a-4513-8450-b2b527609bd6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-854648ff6d-fqwtv_openshift-operator-lifecycle-manager_f965b971-7e9a-4513-8450-b2b527609bd6_0(41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b): error adding pod openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b\\\" Netns:\\\"/var/run/netns/3a0a4ff0-1a49-4f82-ad4d-e9a9c82be224\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-854648ff6d-fqwtv;K8S_POD_INFRA_CONTAINER_ID=41657c10e1153afc50286b80981cd9e44d47c8498503fe701bdc9101d8b4066b;K8S_POD_UID=f965b971-7e9a-4513-8450-b2b527609bd6\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv/f965b971-7e9a-4513-8450-b2b527609bd6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-854648ff6d-fqwtv in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-854648ff6d-fqwtv?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" podUID="f965b971-7e9a-4513-8450-b2b527609bd6" Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: E0309 16:27:49.170286 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-qtmrd_openshift-operator-lifecycle-manager_be86c85d-59b1-4279-8253-a998ca16cd4d_0(ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-qtmrd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761" Netns:"/var/run/netns/cd9958c2-fa91-4c45-b691-6de757bb28f4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-qtmrd;K8S_POD_INFRA_CONTAINER_ID=ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761;K8S_POD_UID=be86c85d-59b1-4279-8253-a998ca16cd4d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd/be86c85d-59b1-4279-8253-a998ca16cd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-qtmrd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: E0309 16:27:49.170355 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_olm-operator-d64cfc9db-qtmrd_openshift-operator-lifecycle-manager_be86c85d-59b1-4279-8253-a998ca16cd4d_0(ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-qtmrd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761" Netns:"/var/run/netns/cd9958c2-fa91-4c45-b691-6de757bb28f4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-qtmrd;K8S_POD_INFRA_CONTAINER_ID=ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761;K8S_POD_UID=be86c85d-59b1-4279-8253-a998ca16cd4d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd/be86c85d-59b1-4279-8253-a998ca16cd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-qtmrd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.170357 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 
16:27:49.170357 master-0 kubenswrapper[7604]: > pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:27:49.170956 master-0 kubenswrapper[7604]: E0309 16:27:49.170376 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.170956 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-qtmrd_openshift-operator-lifecycle-manager_be86c85d-59b1-4279-8253-a998ca16cd4d_0(ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-qtmrd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761" Netns:"/var/run/netns/cd9958c2-fa91-4c45-b691-6de757bb28f4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-qtmrd;K8S_POD_INFRA_CONTAINER_ID=ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761;K8S_POD_UID=be86c85d-59b1-4279-8253-a998ca16cd4d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd/be86c85d-59b1-4279-8253-a998ca16cd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-qtmrd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.170956 master-0 
kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.170956 master-0 kubenswrapper[7604]: > pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:27:49.170956 master-0 kubenswrapper[7604]: E0309 16:27:49.170465 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"olm-operator-d64cfc9db-qtmrd_openshift-operator-lifecycle-manager(be86c85d-59b1-4279-8253-a998ca16cd4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"olm-operator-d64cfc9db-qtmrd_openshift-operator-lifecycle-manager(be86c85d-59b1-4279-8253-a998ca16cd4d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-d64cfc9db-qtmrd_openshift-operator-lifecycle-manager_be86c85d-59b1-4279-8253-a998ca16cd4d_0(ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761): error adding pod openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-qtmrd to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761\\\" Netns:\\\"/var/run/netns/cd9958c2-fa91-4c45-b691-6de757bb28f4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-d64cfc9db-qtmrd;K8S_POD_INFRA_CONTAINER_ID=ebebf7984f7e921f85a14898194c966ff85b0d3452d2ef3a389fdfd367b9e761;K8S_POD_UID=be86c85d-59b1-4279-8253-a998ca16cd4d\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd/be86c85d-59b1-4279-8253-a998ca16cd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-d64cfc9db-qtmrd in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-d64cfc9db-qtmrd?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" podUID="be86c85d-59b1-4279-8253-a998ca16cd4d" Mar 09 16:27:49.305681 master-0 kubenswrapper[7604]: E0309 16:27:49.305600 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.305681 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-674cbfbd9d-8lvt9_openshift-monitoring_004d1e93-2345-4e62-902c-33f9dbb0f397_0(5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-8lvt9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922" Netns:"/var/run/netns/281db752-2bc4-441e-a75e-825d55fd389e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-8lvt9;K8S_POD_INFRA_CONTAINER_ID=5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922;K8S_POD_UID=004d1e93-2345-4e62-902c-33f9dbb0f397" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9/004d1e93-2345-4e62-902c-33f9dbb0f397]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-674cbfbd9d-8lvt9) Mar 09 16:27:49.305681 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.305681 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: E0309 16:27:49.305698 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-monitoring-operator-674cbfbd9d-8lvt9_openshift-monitoring_004d1e93-2345-4e62-902c-33f9dbb0f397_0(5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-8lvt9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922" Netns:"/var/run/netns/281db752-2bc4-441e-a75e-825d55fd389e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-8lvt9;K8S_POD_INFRA_CONTAINER_ID=5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922;K8S_POD_UID=004d1e93-2345-4e62-902c-33f9dbb0f397" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9/004d1e93-2345-4e62-902c-33f9dbb0f397]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-674cbfbd9d-8lvt9) Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.305858 master-0 
kubenswrapper[7604]: > pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: E0309 16:27:49.305720 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-674cbfbd9d-8lvt9_openshift-monitoring_004d1e93-2345-4e62-902c-33f9dbb0f397_0(5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-8lvt9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922" Netns:"/var/run/netns/281db752-2bc4-441e-a75e-825d55fd389e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-8lvt9;K8S_POD_INFRA_CONTAINER_ID=5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922;K8S_POD_UID=004d1e93-2345-4e62-902c-33f9dbb0f397" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9/004d1e93-2345-4e62-902c-33f9dbb0f397]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-674cbfbd9d-8lvt9) Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: > pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:27:49.305858 master-0 kubenswrapper[7604]: E0309 16:27:49.305782 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-monitoring-operator-674cbfbd9d-8lvt9_openshift-monitoring(004d1e93-2345-4e62-902c-33f9dbb0f397)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-monitoring-operator-674cbfbd9d-8lvt9_openshift-monitoring(004d1e93-2345-4e62-902c-33f9dbb0f397)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-674cbfbd9d-8lvt9_openshift-monitoring_004d1e93-2345-4e62-902c-33f9dbb0f397_0(5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922): error adding pod openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-8lvt9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922\\\" Netns:\\\"/var/run/netns/281db752-2bc4-441e-a75e-825d55fd389e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-674cbfbd9d-8lvt9;K8S_POD_INFRA_CONTAINER_ID=5cf3a1f6efc261d9060aaae8652ba68e7090be7b8d89f725924a8949d07f8922;K8S_POD_UID=004d1e93-2345-4e62-902c-33f9dbb0f397\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9/004d1e93-2345-4e62-902c-33f9dbb0f397]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-674cbfbd9d-8lvt9 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-674cbfbd9d-8lvt9)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" podUID="004d1e93-2345-4e62-902c-33f9dbb0f397" Mar 09 16:27:49.307343 master-0 kubenswrapper[7604]: E0309 16:27:49.307290 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.307343 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-operator-589895fbb7-6sknh_openshift-dns-operator_72739f4d-da25-493b-91ef-d2b64e9297dd_0(15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c): error adding pod openshift-dns-operator_dns-operator-589895fbb7-6sknh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c" Netns:"/var/run/netns/1531205b-978d-49dc-8af6-623a9e06f070" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-589895fbb7-6sknh;K8S_POD_INFRA_CONTAINER_ID=15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c;K8S_POD_UID=72739f4d-da25-493b-91ef-d2b64e9297dd" Path:"" ERRORED: error configuring pod [openshift-dns-operator/dns-operator-589895fbb7-6sknh] networking: Multus: [openshift-dns-operator/dns-operator-589895fbb7-6sknh/72739f4d-da25-493b-91ef-d2b64e9297dd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: SetNetworkStatus: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods dns-operator-589895fbb7-6sknh) Mar 09 16:27:49.307343 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.307343 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.307517 master-0 kubenswrapper[7604]: E0309 16:27:49.307367 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.307517 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-operator-589895fbb7-6sknh_openshift-dns-operator_72739f4d-da25-493b-91ef-d2b64e9297dd_0(15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c): error adding pod 
openshift-dns-operator_dns-operator-589895fbb7-6sknh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c" Netns:"/var/run/netns/1531205b-978d-49dc-8af6-623a9e06f070" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-589895fbb7-6sknh;K8S_POD_INFRA_CONTAINER_ID=15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c;K8S_POD_UID=72739f4d-da25-493b-91ef-d2b64e9297dd" Path:"" ERRORED: error configuring pod [openshift-dns-operator/dns-operator-589895fbb7-6sknh] networking: Multus: [openshift-dns-operator/dns-operator-589895fbb7-6sknh/72739f4d-da25-493b-91ef-d2b64e9297dd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: SetNetworkStatus: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods dns-operator-589895fbb7-6sknh) Mar 09 16:27:49.307517 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.307517 master-0 kubenswrapper[7604]: > pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:27:49.307683 master-0 kubenswrapper[7604]: E0309 16:27:49.307652 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.307683 master-0 kubenswrapper[7604]: rpc error: 
code = Unknown desc = failed to create pod network sandbox k8s_dns-operator-589895fbb7-6sknh_openshift-dns-operator_72739f4d-da25-493b-91ef-d2b64e9297dd_0(15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c): error adding pod openshift-dns-operator_dns-operator-589895fbb7-6sknh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c" Netns:"/var/run/netns/1531205b-978d-49dc-8af6-623a9e06f070" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-589895fbb7-6sknh;K8S_POD_INFRA_CONTAINER_ID=15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c;K8S_POD_UID=72739f4d-da25-493b-91ef-d2b64e9297dd" Path:"" ERRORED: error configuring pod [openshift-dns-operator/dns-operator-589895fbb7-6sknh] networking: Multus: [openshift-dns-operator/dns-operator-589895fbb7-6sknh/72739f4d-da25-493b-91ef-d2b64e9297dd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: SetNetworkStatus: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods dns-operator-589895fbb7-6sknh) Mar 09 16:27:49.307683 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.307683 master-0 kubenswrapper[7604]: > 
pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:27:49.307794 master-0 kubenswrapper[7604]: E0309 16:27:49.307735 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dns-operator-589895fbb7-6sknh_openshift-dns-operator(72739f4d-da25-493b-91ef-d2b64e9297dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dns-operator-589895fbb7-6sknh_openshift-dns-operator(72739f4d-da25-493b-91ef-d2b64e9297dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-operator-589895fbb7-6sknh_openshift-dns-operator_72739f4d-da25-493b-91ef-d2b64e9297dd_0(15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c): error adding pod openshift-dns-operator_dns-operator-589895fbb7-6sknh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c\\\" Netns:\\\"/var/run/netns/1531205b-978d-49dc-8af6-623a9e06f070\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-dns-operator;K8S_POD_NAME=dns-operator-589895fbb7-6sknh;K8S_POD_INFRA_CONTAINER_ID=15e798de484a8e433071cc8e002b10ecb390218a4586b149d6f9ea3b9f00225c;K8S_POD_UID=72739f4d-da25-493b-91ef-d2b64e9297dd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-dns-operator/dns-operator-589895fbb7-6sknh] networking: Multus: [openshift-dns-operator/dns-operator-589895fbb7-6sknh/72739f4d-da25-493b-91ef-d2b64e9297dd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: SetNetworkStatus: failed to update the pod dns-operator-589895fbb7-6sknh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods 
dns-operator-589895fbb7-6sknh)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" podUID="72739f4d-da25-493b-91ef-d2b64e9297dd" Mar 09 16:27:49.386050 master-0 kubenswrapper[7604]: E0309 16:27:49.386018 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.386050 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api_fa7f88a3-9845-49a3-a108-d524df592961_0(352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f" Netns:"/var/run/netns/2ef36ac8-1c26-48d6-b4d8-f4d96aff7909" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-p27tf;K8S_POD_INFRA_CONTAINER_ID=352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f;K8S_POD_UID=fa7f88a3-9845-49a3-a108-d524df592961" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf/fa7f88a3-9845-49a3-a108-d524df592961]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-p27tf?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.386050 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.386050 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.386257 master-0 kubenswrapper[7604]: E0309 16:27:49.386239 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.386257 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api_fa7f88a3-9845-49a3-a108-d524df592961_0(352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f" Netns:"/var/run/netns/2ef36ac8-1c26-48d6-b4d8-f4d96aff7909" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-p27tf;K8S_POD_INFRA_CONTAINER_ID=352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f;K8S_POD_UID=fa7f88a3-9845-49a3-a108-d524df592961" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf/fa7f88a3-9845-49a3-a108-d524df592961]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-p27tf?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.386257 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.386257 master-0 kubenswrapper[7604]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:27:49.386401 master-0 kubenswrapper[7604]: E0309 16:27:49.386388 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.386401 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api_fa7f88a3-9845-49a3-a108-d524df592961_0(352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f" Netns:"/var/run/netns/2ef36ac8-1c26-48d6-b4d8-f4d96aff7909" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-p27tf;K8S_POD_INFRA_CONTAINER_ID=352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f;K8S_POD_UID=fa7f88a3-9845-49a3-a108-d524df592961" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf/fa7f88a3-9845-49a3-a108-d524df592961]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-p27tf?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.386401 master-0 kubenswrapper[7604]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.386401 master-0 kubenswrapper[7604]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:27:49.386675 master-0 kubenswrapper[7604]: E0309 16:27:49.386627 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api(fa7f88a3-9845-49a3-a108-d524df592961)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api(fa7f88a3-9845-49a3-a108-d524df592961)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api_fa7f88a3-9845-49a3-a108-d524df592961_0(352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f\\\" Netns:\\\"/var/run/netns/2ef36ac8-1c26-48d6-b4d8-f4d96aff7909\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-p27tf;K8S_POD_INFRA_CONTAINER_ID=352d41152e9bc1b4a8bde621458d54688f215d5abe92ab0f75892b3be8543e0f;K8S_POD_UID=fa7f88a3-9845-49a3-a108-d524df592961\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf/fa7f88a3-9845-49a3-a108-d524df592961]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-p27tf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-p27tf?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" podUID="fa7f88a3-9845-49a3-a108-d524df592961" Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: E0309 16:27:49.395948 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-g8n5t_openshift-multus_4bd3c489-427c-4a47-b7b9-5d1611b9be12_0(801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514): error adding pod openshift-multus_multus-admission-controller-8d675b596-g8n5t to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 
400: 'ContainerID:"801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514" Netns:"/var/run/netns/91612d15-93cb-4588-baaa-63c0cd19716d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-g8n5t;K8S_POD_INFRA_CONTAINER_ID=801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514;K8S_POD_UID=4bd3c489-427c-4a47-b7b9-5d1611b9be12" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-g8n5t] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-g8n5t/4bd3c489-427c-4a47-b7b9-5d1611b9be12]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-g8n5t?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: E0309 16:27:49.396003 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_multus-admission-controller-8d675b596-g8n5t_openshift-multus_4bd3c489-427c-4a47-b7b9-5d1611b9be12_0(801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514): error adding pod openshift-multus_multus-admission-controller-8d675b596-g8n5t to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514" Netns:"/var/run/netns/91612d15-93cb-4588-baaa-63c0cd19716d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-g8n5t;K8S_POD_INFRA_CONTAINER_ID=801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514;K8S_POD_UID=4bd3c489-427c-4a47-b7b9-5d1611b9be12" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-g8n5t] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-g8n5t/4bd3c489-427c-4a47-b7b9-5d1611b9be12]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-g8n5t?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 
16:27:49.396416 master-0 kubenswrapper[7604]: > pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: E0309 16:27:49.396034 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-g8n5t_openshift-multus_4bd3c489-427c-4a47-b7b9-5d1611b9be12_0(801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514): error adding pod openshift-multus_multus-admission-controller-8d675b596-g8n5t to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514" Netns:"/var/run/netns/91612d15-93cb-4588-baaa-63c0cd19716d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-g8n5t;K8S_POD_INFRA_CONTAINER_ID=801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514;K8S_POD_UID=4bd3c489-427c-4a47-b7b9-5d1611b9be12" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-g8n5t] networking: Multus: [openshift-multus/multus-admission-controller-8d675b596-g8n5t/4bd3c489-427c-4a47-b7b9-5d1611b9be12]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-g8n5t?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: ': 
StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: > pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:27:49.396416 master-0 kubenswrapper[7604]: E0309 16:27:49.396101 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-8d675b596-g8n5t_openshift-multus(4bd3c489-427c-4a47-b7b9-5d1611b9be12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-8d675b596-g8n5t_openshift-multus(4bd3c489-427c-4a47-b7b9-5d1611b9be12)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-8d675b596-g8n5t_openshift-multus_4bd3c489-427c-4a47-b7b9-5d1611b9be12_0(801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514): error adding pod openshift-multus_multus-admission-controller-8d675b596-g8n5t to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514\\\" Netns:\\\"/var/run/netns/91612d15-93cb-4588-baaa-63c0cd19716d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-8d675b596-g8n5t;K8S_POD_INFRA_CONTAINER_ID=801b57937a5bb44c836acc829f621f90db1158065cf935d53d415faa8721d514;K8S_POD_UID=4bd3c489-427c-4a47-b7b9-5d1611b9be12\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-8d675b596-g8n5t] networking: Multus: 
[openshift-multus/multus-admission-controller-8d675b596-g8n5t/4bd3c489-427c-4a47-b7b9-5d1611b9be12]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-8d675b596-g8n5t in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-8d675b596-g8n5t?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" Mar 09 16:27:49.399799 master-0 kubenswrapper[7604]: E0309 16:27:49.399741 7604 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 09 16:27:49.399799 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-n7slb_openshift-multus_ef122f26-bfae-44d2-a70a-8507b3b47332_0(f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc): error adding pod openshift-multus_network-metrics-daemon-n7slb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc" 
Netns:"/var/run/netns/4949dd30-59f4-4dbb-aa50-f06d7293a289" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-n7slb;K8S_POD_INFRA_CONTAINER_ID=f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc;K8S_POD_UID=ef122f26-bfae-44d2-a70a-8507b3b47332" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-n7slb] networking: Multus: [openshift-multus/network-metrics-daemon-n7slb/ef122f26-bfae-44d2-a70a-8507b3b47332]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-n7slb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.399799 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.399799 master-0 kubenswrapper[7604]: > Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: E0309 16:27:49.399832 7604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-n7slb_openshift-multus_ef122f26-bfae-44d2-a70a-8507b3b47332_0(f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc): error adding pod openshift-multus_network-metrics-daemon-n7slb to CNI network "multus-cni-network": 
plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc" Netns:"/var/run/netns/4949dd30-59f4-4dbb-aa50-f06d7293a289" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-n7slb;K8S_POD_INFRA_CONTAINER_ID=f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc;K8S_POD_UID=ef122f26-bfae-44d2-a70a-8507b3b47332" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-n7slb] networking: Multus: [openshift-multus/network-metrics-daemon-n7slb/ef122f26-bfae-44d2-a70a-8507b3b47332]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-n7slb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: > pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: E0309 16:27:49.399857 7604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_network-metrics-daemon-n7slb_openshift-multus_ef122f26-bfae-44d2-a70a-8507b3b47332_0(f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc): error adding pod openshift-multus_network-metrics-daemon-n7slb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc" Netns:"/var/run/netns/4949dd30-59f4-4dbb-aa50-f06d7293a289" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-n7slb;K8S_POD_INFRA_CONTAINER_ID=f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc;K8S_POD_UID=ef122f26-bfae-44d2-a70a-8507b3b47332" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-n7slb] networking: Multus: [openshift-multus/network-metrics-daemon-n7slb/ef122f26-bfae-44d2-a70a-8507b3b47332]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-n7slb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 09 16:27:49.400000 master-0 kubenswrapper[7604]: > pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 
16:27:49.400000 master-0 kubenswrapper[7604]: E0309 16:27:49.399938 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-n7slb_openshift-multus(ef122f26-bfae-44d2-a70a-8507b3b47332)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-n7slb_openshift-multus(ef122f26-bfae-44d2-a70a-8507b3b47332)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-n7slb_openshift-multus_ef122f26-bfae-44d2-a70a-8507b3b47332_0(f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc): error adding pod openshift-multus_network-metrics-daemon-n7slb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc\\\" Netns:\\\"/var/run/netns/4949dd30-59f4-4dbb-aa50-f06d7293a289\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-n7slb;K8S_POD_INFRA_CONTAINER_ID=f1f26e63dc72d4c6ee83a80c0405d847d7cf0308d3149e331a09a98f815e33dc;K8S_POD_UID=ef122f26-bfae-44d2-a70a-8507b3b47332\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-n7slb] networking: Multus: [openshift-multus/network-metrics-daemon-n7slb/ef122f26-bfae-44d2-a70a-8507b3b47332]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-n7slb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-n7slb?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/network-metrics-daemon-n7slb" podUID="ef122f26-bfae-44d2-a70a-8507b3b47332" Mar 09 16:27:51.054924 master-0 kubenswrapper[7604]: I0309 16:27:51.054795 7604 generic.go:334] "Generic (PLEG): container finished" podID="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" containerID="7c3fbf08ff6da10a25d918bd4cbabfd4c79ce8ba8a9c8a411b80c1c351bae8a7" exitCode=0 Mar 09 16:27:51.056322 master-0 kubenswrapper[7604]: I0309 16:27:51.056296 7604 generic.go:334] "Generic (PLEG): container finished" podID="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" containerID="1e5e32f0f63434eb2622b072a5c0a325920460736fce227cb33b7dd8fc950069" exitCode=0 Mar 09 16:27:55.229163 master-0 kubenswrapper[7604]: E0309 16:27:55.229054 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 09 16:27:57.823467 master-0 kubenswrapper[7604]: E0309 16:27:57.821143 7604 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.7s" Mar 09 16:27:57.830190 master-0 kubenswrapper[7604]: I0309 16:27:57.830134 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838580 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838634 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838648 7604 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="5542ec7f-4ba2-41f5-9e48-3f5fbd8ab5c8" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838675 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838686 7604 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="5542ec7f-4ba2-41f5-9e48-3f5fbd8ab5c8" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838697 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"963633a2-3f9d-4b82-9e53-d749fa52bf8e","Type":"ContainerDied","Data":"d41d86bd25e4bbee52e08006f2bc72adad98a14d24d258528deb873f333249a6"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838721 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1e5298b1-ccde-4c18-8cdb-f415a4842f75","Type":"ContainerDied","Data":"99a339c5f3968e16e82464c06f5f8bce759eee7e72f76870e9bcaf5b40dfae4f"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838736 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"35ea1971363594acb6e2af9ffc0246bb0a5c5f470f8d574da32d0f7bbc775968"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838753 
7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" event={"ID":"5565c060-5952-4e85-8873-18bb80663924","Type":"ContainerDied","Data":"a8d177dbb3aa3504d7da8194a33995b9c5590e73006f731e32a19254943a15e2"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838763 7604 scope.go:117] "RemoveContainer" containerID="a517766120d5207dbc0746849224568d7e6239234bc628933b81ef9e4c5bff53" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.838766 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" event={"ID":"d6912539-9b06-4e2c-b6a8-155df31147f2","Type":"ContainerDied","Data":"a517766120d5207dbc0746849224568d7e6239234bc628933b81ef9e4c5bff53"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839094 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerDied","Data":"c33568491251a6cc29f433d394d9f99ae4624c6f4d925ee43ed4349c74f3003e"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839113 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"cdee4fd47317482d2314470b8d7e76453519a7ffb89e09ee80444b9e7dc9b818"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839123 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"019a53aacd83e37d8e9ec3c064556104c3d28abe8d9353b3fe0029fa09706cde"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839131 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"1ef790e4963197709ca73ccb0ef459f616a446f12d2312254e29118d5fbf4647"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839140 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"9246a82d36d6d839dd216afb960c961d28bf9631aa040ddcbe7751de007686ca"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839148 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"deb49cc582b4f05da3e439b71cfab3c7b565bd681dbf4fabe99e76944648f931"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839156 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" event={"ID":"34a4491c-12cc-4531-ad3e-246e93ed7842","Type":"ContainerDied","Data":"fa5ddd5802e33c8a6619b86d4545b8a3364c98e851507c10917062099a64157c"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839169 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" event={"ID":"e2e38be5-1d33-4171-b27f-78a335f1590b","Type":"ContainerDied","Data":"aae9b4fa27818489ab82742a1d088f45fbd99626e96c87f0d251b8c8d0c8bce4"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839180 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"8ec03662bfb689a4764f7edbb538732c79e6e42855becb27b7223236cfbfeaa7"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839192 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839203 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" event={"ID":"6c4dfdcc-e182-4831-98e4-1eedb069bcf6","Type":"ContainerDied","Data":"7c3fbf08ff6da10a25d918bd4cbabfd4c79ce8ba8a9c8a411b80c1c351bae8a7"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839214 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" event={"ID":"6cf9eae5-38bc-48fa-8339-d0751bb18e8c","Type":"ContainerDied","Data":"1e5e32f0f63434eb2622b072a5c0a325920460736fce227cb33b7dd8fc950069"} Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839262 7604 scope.go:117] "RemoveContainer" containerID="d44b4281b666f32d8647c6a143f074eebe2a44e65e8dee2574808efbf233ffa9" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839498 7604 scope.go:117] "RemoveContainer" containerID="7c3fbf08ff6da10a25d918bd4cbabfd4c79ce8ba8a9c8a411b80c1c351bae8a7" Mar 09 16:27:57.839578 master-0 kubenswrapper[7604]: I0309 16:27:57.839542 7604 scope.go:117] "RemoveContainer" containerID="c33568491251a6cc29f433d394d9f99ae4624c6f4d925ee43ed4349c74f3003e" Mar 09 16:27:57.842583 master-0 kubenswrapper[7604]: I0309 16:27:57.842318 7604 scope.go:117] "RemoveContainer" containerID="1e5e32f0f63434eb2622b072a5c0a325920460736fce227cb33b7dd8fc950069" Mar 09 16:27:57.843355 master-0 kubenswrapper[7604]: I0309 16:27:57.843309 7604 scope.go:117] "RemoveContainer" containerID="fa5ddd5802e33c8a6619b86d4545b8a3364c98e851507c10917062099a64157c" Mar 09 16:27:57.845634 master-0 kubenswrapper[7604]: I0309 16:27:57.845527 7604 scope.go:117] "RemoveContainer" 
containerID="a8d177dbb3aa3504d7da8194a33995b9c5590e73006f731e32a19254943a15e2" Mar 09 16:27:57.846576 master-0 kubenswrapper[7604]: I0309 16:27:57.846227 7604 scope.go:117] "RemoveContainer" containerID="aae9b4fa27818489ab82742a1d088f45fbd99626e96c87f0d251b8c8d0c8bce4" Mar 09 16:27:57.882556 master-0 kubenswrapper[7604]: I0309 16:27:57.882506 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 09 16:27:57.891028 master-0 kubenswrapper[7604]: I0309 16:27:57.886075 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 09 16:27:57.897946 master-0 kubenswrapper[7604]: I0309 16:27:57.897903 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 09 16:27:58.096479 master-0 kubenswrapper[7604]: I0309 16:27:58.096390 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.096366047 podStartE2EDuration="1.096366047s" podCreationTimestamp="2026-03-09 16:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:27:58.092856044 +0000 UTC m=+135.146825467" watchObservedRunningTime="2026-03-09 16:27:58.096366047 +0000 UTC m=+135.150335550" Mar 09 16:27:58.100276 master-0 kubenswrapper[7604]: I0309 16:27:58.100226 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" event={"ID":"e2e38be5-1d33-4171-b27f-78a335f1590b","Type":"ContainerStarted","Data":"26536dc0c3eb884535f611edd83aab852a51eeb18c5af26fe55fde4610066f56"} Mar 09 16:27:58.107468 master-0 kubenswrapper[7604]: I0309 16:27:58.106813 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/0.log" Mar 09 
16:27:58.108717 master-0 kubenswrapper[7604]: I0309 16:27:58.108670 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerStarted","Data":"13f8ce747ae94aa028643a0d90bae20ae130da211dc31135e5f8daffa80a000f"} Mar 09 16:27:58.112643 master-0 kubenswrapper[7604]: I0309 16:27:58.112609 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_6d95c7ed-e3ea-4383-b083-1df5df078f1c/installer/0.log" Mar 09 16:27:58.112721 master-0 kubenswrapper[7604]: I0309 16:27:58.112662 7604 generic.go:334] "Generic (PLEG): container finished" podID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerID="8de19850c9308d09c5cd12077a0d9362d507f0d6192f1e12c63ed63d09fea438" exitCode=1 Mar 09 16:27:58.112867 master-0 kubenswrapper[7604]: I0309 16:27:58.112828 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"6d95c7ed-e3ea-4383-b083-1df5df078f1c","Type":"ContainerDied","Data":"8de19850c9308d09c5cd12077a0d9362d507f0d6192f1e12c63ed63d09fea438"} Mar 09 16:27:58.166457 master-0 kubenswrapper[7604]: E0309 16:27:58.164742 7604 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 09 16:27:58.412121 master-0 kubenswrapper[7604]: I0309 16:27:58.412074 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_963633a2-3f9d-4b82-9e53-d749fa52bf8e/installer/0.log" Mar 09 16:27:58.412313 master-0 kubenswrapper[7604]: I0309 16:27:58.412153 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:27:58.421954 master-0 kubenswrapper[7604]: I0309 16:27:58.421897 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-var-lock\") pod \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " Mar 09 16:27:58.422167 master-0 kubenswrapper[7604]: I0309 16:27:58.422022 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kube-api-access\") pod \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " Mar 09 16:27:58.422167 master-0 kubenswrapper[7604]: I0309 16:27:58.422078 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kubelet-dir\") pod \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\" (UID: \"963633a2-3f9d-4b82-9e53-d749fa52bf8e\") " Mar 09 16:27:58.422336 master-0 kubenswrapper[7604]: I0309 16:27:58.422064 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-var-lock" (OuterVolumeSpecName: "var-lock") pod "963633a2-3f9d-4b82-9e53-d749fa52bf8e" (UID: "963633a2-3f9d-4b82-9e53-d749fa52bf8e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:58.422417 master-0 kubenswrapper[7604]: I0309 16:27:58.422307 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "963633a2-3f9d-4b82-9e53-d749fa52bf8e" (UID: "963633a2-3f9d-4b82-9e53-d749fa52bf8e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:58.425085 master-0 kubenswrapper[7604]: I0309 16:27:58.425045 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "963633a2-3f9d-4b82-9e53-d749fa52bf8e" (UID: "963633a2-3f9d-4b82-9e53-d749fa52bf8e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:27:58.432575 master-0 kubenswrapper[7604]: I0309 16:27:58.432526 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1e5298b1-ccde-4c18-8cdb-f415a4842f75/installer/0.log" Mar 09 16:27:58.432790 master-0 kubenswrapper[7604]: I0309 16:27:58.432595 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 09 16:27:58.523347 master-0 kubenswrapper[7604]: I0309 16:27:58.523218 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kube-api-access\") pod \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " Mar 09 16:27:58.523347 master-0 kubenswrapper[7604]: I0309 16:27:58.523311 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-var-lock\") pod \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " Mar 09 16:27:58.523347 master-0 kubenswrapper[7604]: I0309 16:27:58.523358 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kubelet-dir\") pod 
\"1e5298b1-ccde-4c18-8cdb-f415a4842f75\" (UID: \"1e5298b1-ccde-4c18-8cdb-f415a4842f75\") " Mar 09 16:27:58.523716 master-0 kubenswrapper[7604]: I0309 16:27:58.523678 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-var-lock" (OuterVolumeSpecName: "var-lock") pod "1e5298b1-ccde-4c18-8cdb-f415a4842f75" (UID: "1e5298b1-ccde-4c18-8cdb-f415a4842f75"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:58.523817 master-0 kubenswrapper[7604]: I0309 16:27:58.523763 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1e5298b1-ccde-4c18-8cdb-f415a4842f75" (UID: "1e5298b1-ccde-4c18-8cdb-f415a4842f75"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:58.523877 master-0 kubenswrapper[7604]: I0309 16:27:58.523712 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:58.523973 master-0 kubenswrapper[7604]: I0309 16:27:58.523957 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963633a2-3f9d-4b82-9e53-d749fa52bf8e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:58.524053 master-0 kubenswrapper[7604]: I0309 16:27:58.524041 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963633a2-3f9d-4b82-9e53-d749fa52bf8e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:58.526382 master-0 kubenswrapper[7604]: I0309 16:27:58.526231 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1e5298b1-ccde-4c18-8cdb-f415a4842f75" (UID: "1e5298b1-ccde-4c18-8cdb-f415a4842f75"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:27:58.625364 master-0 kubenswrapper[7604]: I0309 16:27:58.625265 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:58.625364 master-0 kubenswrapper[7604]: I0309 16:27:58.625318 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e5298b1-ccde-4c18-8cdb-f415a4842f75-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:58.625364 master-0 kubenswrapper[7604]: I0309 16:27:58.625331 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1e5298b1-ccde-4c18-8cdb-f415a4842f75-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:59.111346 master-0 kubenswrapper[7604]: I0309 16:27:59.111129 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:27:59.112030 master-0 kubenswrapper[7604]: I0309 16:27:59.111639 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:27:59.125305 master-0 kubenswrapper[7604]: I0309 16:27:59.125132 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103a81df-6dfb-42d3-bc03-4391681c3e35" path="/var/lib/kubelet/pods/103a81df-6dfb-42d3-bc03-4391681c3e35/volumes" Mar 09 16:27:59.125998 master-0 kubenswrapper[7604]: I0309 16:27:59.125963 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1e5298b1-ccde-4c18-8cdb-f415a4842f75/installer/0.log" Mar 09 16:27:59.126469 master-0 kubenswrapper[7604]: I0309 16:27:59.126398 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1e5298b1-ccde-4c18-8cdb-f415a4842f75","Type":"ContainerDied","Data":"3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78"} Mar 09 16:27:59.126469 master-0 kubenswrapper[7604]: I0309 16:27:59.126449 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78" Mar 09 16:27:59.126469 master-0 kubenswrapper[7604]: I0309 16:27:59.126460 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 09 16:27:59.133635 master-0 kubenswrapper[7604]: I0309 16:27:59.133224 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" event={"ID":"d6912539-9b06-4e2c-b6a8-155df31147f2","Type":"ContainerStarted","Data":"cd7efe315849cdb3199a98f6f5c36f77f4fa9f5957ff9a8e14c0814b556fdc59"} Mar 09 16:27:59.136773 master-0 kubenswrapper[7604]: I0309 16:27:59.136604 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" event={"ID":"6c4dfdcc-e182-4831-98e4-1eedb069bcf6","Type":"ContainerStarted","Data":"0890855b3b5026503838ed97808495935321e600acd88d8055621af6b2d87521"} Mar 09 16:27:59.139917 master-0 kubenswrapper[7604]: I0309 16:27:59.139887 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_963633a2-3f9d-4b82-9e53-d749fa52bf8e/installer/0.log" Mar 09 16:27:59.140009 master-0 kubenswrapper[7604]: I0309 16:27:59.139977 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"963633a2-3f9d-4b82-9e53-d749fa52bf8e","Type":"ContainerDied","Data":"7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee"} Mar 09 16:27:59.140052 master-0 kubenswrapper[7604]: I0309 16:27:59.140008 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee" Mar 09 16:27:59.140135 master-0 kubenswrapper[7604]: I0309 16:27:59.140095 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:27:59.142258 master-0 kubenswrapper[7604]: I0309 16:27:59.142204 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" event={"ID":"6cf9eae5-38bc-48fa-8339-d0751bb18e8c","Type":"ContainerStarted","Data":"5d8c100b8bc3cd727e168a74c2e48d870e8a9516215f22c217ef9c223c8bfc22"} Mar 09 16:27:59.150636 master-0 kubenswrapper[7604]: I0309 16:27:59.150581 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" event={"ID":"34a4491c-12cc-4531-ad3e-246e93ed7842","Type":"ContainerStarted","Data":"49dd8e161cea6212329f1712e1bf4a0806751557004321c54967d70157f3883b"} Mar 09 16:27:59.154045 master-0 kubenswrapper[7604]: I0309 16:27:59.153870 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-r82z7_5565c060-5952-4e85-8873-18bb80663924/network-operator/0.log" Mar 09 16:27:59.154045 master-0 kubenswrapper[7604]: I0309 16:27:59.153983 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" event={"ID":"5565c060-5952-4e85-8873-18bb80663924","Type":"ContainerStarted","Data":"dda1c1f36a6b6d9ac75b2bd00d887fa58cc2391c73527d2f8cbd81621d10c3e4"} Mar 09 16:27:59.414809 master-0 kubenswrapper[7604]: I0309 16:27:59.414739 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_6d95c7ed-e3ea-4383-b083-1df5df078f1c/installer/0.log" Mar 09 16:27:59.415195 master-0 kubenswrapper[7604]: I0309 16:27:59.414846 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:27:59.432667 master-0 kubenswrapper[7604]: I0309 16:27:59.432446 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "6d95c7ed-e3ea-4383-b083-1df5df078f1c" (UID: "6d95c7ed-e3ea-4383-b083-1df5df078f1c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:59.432667 master-0 kubenswrapper[7604]: I0309 16:27:59.432483 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-var-lock\") pod \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " Mar 09 16:27:59.432667 master-0 kubenswrapper[7604]: I0309 16:27:59.432670 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kube-api-access\") pod \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " Mar 09 16:27:59.439894 master-0 kubenswrapper[7604]: I0309 16:27:59.432748 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kubelet-dir\") pod \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\" (UID: \"6d95c7ed-e3ea-4383-b083-1df5df078f1c\") " Mar 09 16:27:59.439894 master-0 kubenswrapper[7604]: I0309 16:27:59.432861 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6d95c7ed-e3ea-4383-b083-1df5df078f1c" (UID: "6d95c7ed-e3ea-4383-b083-1df5df078f1c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:27:59.439894 master-0 kubenswrapper[7604]: I0309 16:27:59.433247 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:59.439894 master-0 kubenswrapper[7604]: I0309 16:27:59.433271 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6d95c7ed-e3ea-4383-b083-1df5df078f1c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:59.439894 master-0 kubenswrapper[7604]: I0309 16:27:59.435366 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6d95c7ed-e3ea-4383-b083-1df5df078f1c" (UID: "6d95c7ed-e3ea-4383-b083-1df5df078f1c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:27:59.534871 master-0 kubenswrapper[7604]: I0309 16:27:59.534800 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d95c7ed-e3ea-4383-b083-1df5df078f1c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:27:59.586574 master-0 kubenswrapper[7604]: I0309 16:27:59.586510 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"] Mar 09 16:27:59.591008 master-0 kubenswrapper[7604]: W0309 16:27:59.590927 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd15da434_241d_4a93_9ce3_f943d43bf2ce.slice/crio-6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b WatchSource:0}: Error finding container 6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b: Status 404 returned error can't find the container with id 6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b Mar 09 16:27:59.791753 master-0 kubenswrapper[7604]: I0309 16:27:59.791627 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 09 16:28:00.111031 master-0 kubenswrapper[7604]: I0309 16:28:00.110886 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:28:00.111031 master-0 kubenswrapper[7604]: I0309 16:28:00.110959 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:28:00.111333 master-0 kubenswrapper[7604]: I0309 16:28:00.111304 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:28:00.111415 master-0 kubenswrapper[7604]: I0309 16:28:00.111385 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:28:00.111766 master-0 kubenswrapper[7604]: I0309 16:28:00.111599 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:28:00.111820 master-0 kubenswrapper[7604]: I0309 16:28:00.111799 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:28:00.166564 master-0 kubenswrapper[7604]: I0309 16:28:00.166485 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" event={"ID":"d15da434-241d-4a93-9ce3-f943d43bf2ce","Type":"ContainerStarted","Data":"6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b"} Mar 09 16:28:00.173190 master-0 kubenswrapper[7604]: I0309 16:28:00.172344 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_6d95c7ed-e3ea-4383-b083-1df5df078f1c/installer/0.log" Mar 09 16:28:00.173190 master-0 kubenswrapper[7604]: I0309 16:28:00.172944 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:28:00.173190 master-0 kubenswrapper[7604]: I0309 16:28:00.172921 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"6d95c7ed-e3ea-4383-b083-1df5df078f1c","Type":"ContainerDied","Data":"a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39"} Mar 09 16:28:00.173190 master-0 kubenswrapper[7604]: I0309 16:28:00.173058 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39" Mar 09 16:28:00.578147 master-0 kubenswrapper[7604]: I0309 16:28:00.578100 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-g8n5t"] Mar 09 16:28:00.586238 master-0 kubenswrapper[7604]: W0309 16:28:00.586192 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd3c489_427c_4a47_b7b9_5d1611b9be12.slice/crio-565d53795593613b69876bde417beb025da29d5e3368eb375d8d27d674214719 WatchSource:0}: Error finding container 565d53795593613b69876bde417beb025da29d5e3368eb375d8d27d674214719: Status 404 returned error can't find the container with id 565d53795593613b69876bde417beb025da29d5e3368eb375d8d27d674214719 Mar 09 16:28:00.592074 master-0 kubenswrapper[7604]: I0309 16:28:00.591980 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"] Mar 09 16:28:00.600415 master-0 kubenswrapper[7604]: W0309 16:28:00.600105 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe86c85d_59b1_4279_8253_a998ca16cd4d.slice/crio-6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6 WatchSource:0}: Error finding container 
6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6: Status 404 returned error can't find the container with id 6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6 Mar 09 16:28:00.611628 master-0 kubenswrapper[7604]: I0309 16:28:00.611584 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"] Mar 09 16:28:00.623585 master-0 kubenswrapper[7604]: W0309 16:28:00.623517 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf965b971_7e9a_4513_8450_b2b527609bd6.slice/crio-0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344 WatchSource:0}: Error finding container 0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344: Status 404 returned error can't find the container with id 0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344 Mar 09 16:28:01.179519 master-0 kubenswrapper[7604]: I0309 16:28:01.179434 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" event={"ID":"be86c85d-59b1-4279-8253-a998ca16cd4d","Type":"ContainerStarted","Data":"6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6"} Mar 09 16:28:01.181247 master-0 kubenswrapper[7604]: I0309 16:28:01.181198 7604 generic.go:334] "Generic (PLEG): container finished" podID="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" containerID="ed8140bb922b35373782d1b39705b1d6200c0f0fb01785807a86c3fad481d2c8" exitCode=0 Mar 09 16:28:01.181381 master-0 kubenswrapper[7604]: I0309 16:28:01.181308 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" event={"ID":"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a","Type":"ContainerDied","Data":"ed8140bb922b35373782d1b39705b1d6200c0f0fb01785807a86c3fad481d2c8"} Mar 09 16:28:01.181957 master-0 kubenswrapper[7604]: I0309 
16:28:01.181920 7604 scope.go:117] "RemoveContainer" containerID="ed8140bb922b35373782d1b39705b1d6200c0f0fb01785807a86c3fad481d2c8" Mar 09 16:28:01.187551 master-0 kubenswrapper[7604]: I0309 16:28:01.187385 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" event={"ID":"f965b971-7e9a-4513-8450-b2b527609bd6","Type":"ContainerStarted","Data":"4c30aea0120c55fa556da695b0c4d2181693e2addc82b1aad8161f8f3a386f19"} Mar 09 16:28:01.187551 master-0 kubenswrapper[7604]: I0309 16:28:01.187457 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" event={"ID":"f965b971-7e9a-4513-8450-b2b527609bd6","Type":"ContainerStarted","Data":"0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344"} Mar 09 16:28:01.189081 master-0 kubenswrapper[7604]: I0309 16:28:01.189003 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" event={"ID":"4bd3c489-427c-4a47-b7b9-5d1611b9be12","Type":"ContainerStarted","Data":"565d53795593613b69876bde417beb025da29d5e3368eb375d8d27d674214719"} Mar 09 16:28:01.191474 master-0 kubenswrapper[7604]: I0309 16:28:01.191412 7604 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="08a50d026ef459ff3233ee74fc8df1d0208854ef10f3f9cdd3c02dba9aa4e4f2" exitCode=0 Mar 09 16:28:01.191553 master-0 kubenswrapper[7604]: I0309 16:28:01.191477 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerDied","Data":"08a50d026ef459ff3233ee74fc8df1d0208854ef10f3f9cdd3c02dba9aa4e4f2"} Mar 09 16:28:01.192122 master-0 kubenswrapper[7604]: I0309 16:28:01.192070 7604 scope.go:117] "RemoveContainer" 
containerID="08a50d026ef459ff3233ee74fc8df1d0208854ef10f3f9cdd3c02dba9aa4e4f2" Mar 09 16:28:01.204668 master-0 kubenswrapper[7604]: I0309 16:28:01.204557 7604 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-k7rrt container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 09 16:28:01.204668 master-0 kubenswrapper[7604]: I0309 16:28:01.204611 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" podUID="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 09 16:28:01.424913 master-0 kubenswrapper[7604]: I0309 16:28:01.424855 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:28:01.722728 master-0 kubenswrapper[7604]: I0309 16:28:01.722660 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:28:02.110246 master-0 kubenswrapper[7604]: I0309 16:28:02.110096 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:28:02.110246 master-0 kubenswrapper[7604]: I0309 16:28:02.110139 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:28:02.110246 master-0 kubenswrapper[7604]: I0309 16:28:02.110184 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:28:02.110246 master-0 kubenswrapper[7604]: I0309 16:28:02.110188 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:28:02.110578 master-0 kubenswrapper[7604]: I0309 16:28:02.110558 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:28:02.110741 master-0 kubenswrapper[7604]: I0309 16:28:02.110723 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:28:02.110794 master-0 kubenswrapper[7604]: I0309 16:28:02.110782 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:28:02.110830 master-0 kubenswrapper[7604]: I0309 16:28:02.110802 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:28:02.199287 master-0 kubenswrapper[7604]: I0309 16:28:02.199175 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerStarted","Data":"97231a996b3f971d2df45300f8add68d0e10efa9719fb86375b4c767d77ae7f2"} Mar 09 16:28:02.199913 master-0 kubenswrapper[7604]: I0309 16:28:02.199359 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:28:02.201671 master-0 kubenswrapper[7604]: I0309 16:28:02.201481 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" event={"ID":"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a","Type":"ContainerStarted","Data":"6e9c4ef8e54a1ddaaeace68d16cbf279e55f0b1084e638b1cbf0208c30f75c2d"} Mar 09 16:28:02.204224 master-0 kubenswrapper[7604]: I0309 16:28:02.204199 7604 generic.go:334] "Generic (PLEG): container finished" podID="3a612208-f777-486f-9dde-048b2d898c7f" containerID="a68cd08d6d3f33869738052123770a9d77db899c72df9e881a8184753514b484" exitCode=0 Mar 09 16:28:02.204290 master-0 kubenswrapper[7604]: I0309 16:28:02.204266 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" event={"ID":"3a612208-f777-486f-9dde-048b2d898c7f","Type":"ContainerDied","Data":"a68cd08d6d3f33869738052123770a9d77db899c72df9e881a8184753514b484"} Mar 09 16:28:02.204675 master-0 kubenswrapper[7604]: I0309 16:28:02.204654 7604 scope.go:117] "RemoveContainer" containerID="a68cd08d6d3f33869738052123770a9d77db899c72df9e881a8184753514b484" Mar 09 16:28:02.206238 master-0 kubenswrapper[7604]: I0309 16:28:02.206200 7604 generic.go:334] "Generic (PLEG): container finished" 
podID="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" containerID="8fc1f9c122b644d42570f9573ceb86c8b66b157aee149e8b75a17dc9c0fc5570" exitCode=0 Mar 09 16:28:02.206292 master-0 kubenswrapper[7604]: I0309 16:28:02.206243 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" event={"ID":"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d","Type":"ContainerDied","Data":"8fc1f9c122b644d42570f9573ceb86c8b66b157aee149e8b75a17dc9c0fc5570"} Mar 09 16:28:02.206562 master-0 kubenswrapper[7604]: I0309 16:28:02.206541 7604 scope.go:117] "RemoveContainer" containerID="8fc1f9c122b644d42570f9573ceb86c8b66b157aee149e8b75a17dc9c0fc5570" Mar 09 16:28:03.111167 master-0 kubenswrapper[7604]: I0309 16:28:03.111095 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:28:03.111167 master-0 kubenswrapper[7604]: I0309 16:28:03.111095 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:28:03.117636 master-0 kubenswrapper[7604]: I0309 16:28:03.116822 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:28:03.117636 master-0 kubenswrapper[7604]: I0309 16:28:03.117178 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:28:03.117636 master-0 kubenswrapper[7604]: I0309 16:28:03.117476 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:28:03.118039 master-0 kubenswrapper[7604]: I0309 16:28:03.117952 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:28:03.272767 master-0 kubenswrapper[7604]: I0309 16:28:03.272693 7604 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" containerID="20c3af1506f68ad55d72af72ba11892a7b1fbea246aad319e67c6ab36a77fae2" exitCode=0 Mar 09 16:28:03.273378 master-0 kubenswrapper[7604]: I0309 16:28:03.272811 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerDied","Data":"20c3af1506f68ad55d72af72ba11892a7b1fbea246aad319e67c6ab36a77fae2"} Mar 09 16:28:03.273378 master-0 kubenswrapper[7604]: I0309 16:28:03.273342 7604 scope.go:117] "RemoveContainer" containerID="20c3af1506f68ad55d72af72ba11892a7b1fbea246aad319e67c6ab36a77fae2" Mar 09 16:28:03.295959 master-0 kubenswrapper[7604]: I0309 16:28:03.295201 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" event={"ID":"3a612208-f777-486f-9dde-048b2d898c7f","Type":"ContainerStarted","Data":"7559e3794c2b375f42338baad89cc8a6296746d7de572bec45d4f7ebb08433c6"} Mar 09 16:28:03.298222 master-0 kubenswrapper[7604]: I0309 16:28:03.298104 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" event={"ID":"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d","Type":"ContainerStarted","Data":"5a53068d3aa0add7405bb4afae02f9c31d2802806c126fb434c8dcf05fc615e2"} Mar 09 16:28:03.543174 master-0 kubenswrapper[7604]: I0309 16:28:03.541839 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"] Mar 09 16:28:03.683632 master-0 kubenswrapper[7604]: I0309 16:28:03.683534 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/network-metrics-daemon-n7slb"] Mar 09 16:28:03.683632 master-0 kubenswrapper[7604]: I0309 16:28:03.683599 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"] Mar 09 16:28:03.683632 master-0 kubenswrapper[7604]: I0309 16:28:03.683635 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"] Mar 09 16:28:03.690100 master-0 kubenswrapper[7604]: W0309 16:28:03.690062 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod004d1e93_2345_4e62_902c_33f9dbb0f397.slice/crio-e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf WatchSource:0}: Error finding container e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf: Status 404 returned error can't find the container with id e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf Mar 09 16:28:03.692710 master-0 kubenswrapper[7604]: W0309 16:28:03.692550 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf606b775_bf22_4d64_abb4_8e0e24ddb5cd.slice/crio-68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f WatchSource:0}: Error finding container 68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f: Status 404 returned error can't find the container with id 68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f Mar 09 16:28:03.696359 master-0 kubenswrapper[7604]: W0309 16:28:03.696326 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef122f26_bfae_44d2_a70a_8507b3b47332.slice/crio-53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134 WatchSource:0}: Error finding container 53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134: Status 404 
returned error can't find the container with id 53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134 Mar 09 16:28:03.723770 master-0 kubenswrapper[7604]: I0309 16:28:03.723715 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"] Mar 09 16:28:03.725549 master-0 kubenswrapper[7604]: W0309 16:28:03.725482 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72739f4d_da25_493b_91ef_d2b64e9297dd.slice/crio-9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47 WatchSource:0}: Error finding container 9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47: Status 404 returned error can't find the container with id 9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47 Mar 09 16:28:03.727903 master-0 kubenswrapper[7604]: I0309 16:28:03.727863 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-6sknh"] Mar 09 16:28:03.729294 master-0 kubenswrapper[7604]: W0309 16:28:03.729086 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e765395_7c6b_4cba_9a5a_37ba888722bb.slice/crio-7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b WatchSource:0}: Error finding container 7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b: Status 404 returned error can't find the container with id 7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b Mar 09 16:28:03.751203 master-0 kubenswrapper[7604]: I0309 16:28:03.751170 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"] Mar 09 16:28:03.762265 master-0 kubenswrapper[7604]: W0309 16:28:03.762189 7604 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa7f88a3_9845_49a3_a108_d524df592961.slice/crio-d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029 WatchSource:0}: Error finding container d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029: Status 404 returned error can't find the container with id d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029 Mar 09 16:28:04.310705 master-0 kubenswrapper[7604]: I0309 16:28:04.310632 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerStarted","Data":"0c53bd04ab08a6dcf8bec8933ab495e84121056b0c52db4cc518d1487933ea5c"} Mar 09 16:28:04.324511 master-0 kubenswrapper[7604]: I0309 16:28:04.324071 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f"} Mar 09 16:28:04.338393 master-0 kubenswrapper[7604]: I0309 16:28:04.338232 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" event={"ID":"2e765395-7c6b-4cba-9a5a-37ba888722bb","Type":"ContainerStarted","Data":"7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b"} Mar 09 16:28:04.344301 master-0 kubenswrapper[7604]: I0309 16:28:04.344239 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" event={"ID":"4bd3c489-427c-4a47-b7b9-5d1611b9be12","Type":"ContainerStarted","Data":"936e54f2dcd8b97ec29ef8044719dc7e3e661dccc2b4396664320d24598d2652"} Mar 09 16:28:04.345654 master-0 kubenswrapper[7604]: I0309 16:28:04.345608 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" event={"ID":"4bd3c489-427c-4a47-b7b9-5d1611b9be12","Type":"ContainerStarted","Data":"2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0"} Mar 09 16:28:04.350007 master-0 kubenswrapper[7604]: I0309 16:28:04.349932 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" event={"ID":"5b9030c9-7f5f-4e54-ae93-140469e3558b","Type":"ContainerStarted","Data":"0f0a39d805a27ae6402fcdfc0601eab19733f53f21a52d2a798a59ad90607729"} Mar 09 16:28:04.359386 master-0 kubenswrapper[7604]: I0309 16:28:04.359260 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" event={"ID":"72739f4d-da25-493b-91ef-d2b64e9297dd","Type":"ContainerStarted","Data":"9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47"} Mar 09 16:28:04.361921 master-0 kubenswrapper[7604]: I0309 16:28:04.361847 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" event={"ID":"004d1e93-2345-4e62-902c-33f9dbb0f397","Type":"ContainerStarted","Data":"e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf"} Mar 09 16:28:04.364571 master-0 kubenswrapper[7604]: I0309 16:28:04.364533 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n7slb" event={"ID":"ef122f26-bfae-44d2-a70a-8507b3b47332","Type":"ContainerStarted","Data":"53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134"} Mar 09 16:28:04.369337 master-0 kubenswrapper[7604]: I0309 16:28:04.369245 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerStarted","Data":"d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029"} Mar 09 16:28:04.429089 master-0 
kubenswrapper[7604]: I0309 16:28:04.429028 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:04.429369 master-0 kubenswrapper[7604]: I0309 16:28:04.429110 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:04.723212 master-0 kubenswrapper[7604]: I0309 16:28:04.723030 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:04.723212 master-0 kubenswrapper[7604]: I0309 16:28:04.723111 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:04.791663 master-0 kubenswrapper[7604]: I0309 16:28:04.791586 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 09 16:28:04.816881 master-0 kubenswrapper[7604]: I0309 16:28:04.816757 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 09 16:28:05.057655 master-0 kubenswrapper[7604]: 
I0309 16:28:05.057485 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:28:05.062169 master-0 kubenswrapper[7604]: I0309 16:28:05.062109 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:28:05.382540 master-0 kubenswrapper[7604]: I0309 16:28:05.382318 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:28:05.388307 master-0 kubenswrapper[7604]: I0309 16:28:05.388254 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:28:05.396717 master-0 kubenswrapper[7604]: I0309 16:28:05.396476 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 09 16:28:07.425013 master-0 kubenswrapper[7604]: I0309 16:28:07.424959 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:07.425553 master-0 kubenswrapper[7604]: I0309 16:28:07.425037 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:07.722773 master-0 kubenswrapper[7604]: I0309 16:28:07.722727 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator 
namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:07.723063 master-0 kubenswrapper[7604]: I0309 16:28:07.722777 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:10.425662 master-0 kubenswrapper[7604]: I0309 16:28:10.425557 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:10.426393 master-0 kubenswrapper[7604]: I0309 16:28:10.425687 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:10.722934 master-0 kubenswrapper[7604]: I0309 16:28:10.722833 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:10.722934 master-0 kubenswrapper[7604]: I0309 16:28:10.722914 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" 
podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:10.723280 master-0 kubenswrapper[7604]: I0309 16:28:10.722980 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:28:10.723939 master-0 kubenswrapper[7604]: I0309 16:28:10.723880 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"97231a996b3f971d2df45300f8add68d0e10efa9719fb86375b4c767d77ae7f2"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 09 16:28:10.724051 master-0 kubenswrapper[7604]: I0309 16:28:10.723940 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" containerID="cri-o://97231a996b3f971d2df45300f8add68d0e10efa9719fb86375b4c767d77ae7f2" gracePeriod=30 Mar 09 16:28:10.725841 master-0 kubenswrapper[7604]: I0309 16:28:10.725721 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:10.725955 master-0 kubenswrapper[7604]: I0309 16:28:10.725890 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:13.424876 master-0 kubenswrapper[7604]: I0309 16:28:13.424743 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:13.424876 master-0 kubenswrapper[7604]: I0309 16:28:13.424834 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:16.425582 master-0 kubenswrapper[7604]: I0309 16:28:16.425499 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:16.426191 master-0 kubenswrapper[7604]: I0309 16:28:16.425607 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:19.424780 master-0 kubenswrapper[7604]: I0309 16:28:19.424688 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure 
output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:19.425522 master-0 kubenswrapper[7604]: I0309 16:28:19.424779 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:22.424721 master-0 kubenswrapper[7604]: I0309 16:28:22.424666 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:22.425362 master-0 kubenswrapper[7604]: I0309 16:28:22.424748 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:24.500103 master-0 kubenswrapper[7604]: I0309 16:28:24.500035 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" event={"ID":"5b9030c9-7f5f-4e54-ae93-140469e3558b","Type":"ContainerStarted","Data":"66330a4bd334b8d1827e4db59cc4dd96a4c0efbd28a98ca757e4b3ea6788abd7"} Mar 09 16:28:24.501918 master-0 kubenswrapper[7604]: I0309 16:28:24.501877 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:28:24.502295 master-0 kubenswrapper[7604]: I0309 16:28:24.502265 7604 
patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-vh6m4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Mar 09 16:28:24.502357 master-0 kubenswrapper[7604]: I0309 16:28:24.502306 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" podUID="5b9030c9-7f5f-4e54-ae93-140469e3558b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Mar 09 16:28:24.504588 master-0 kubenswrapper[7604]: I0309 16:28:24.504172 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" event={"ID":"004d1e93-2345-4e62-902c-33f9dbb0f397","Type":"ContainerStarted","Data":"ae7cfaea4118a54fb1bd46dbd238bb4e3f58f097bb6540a050115688a5aeb38c"} Mar 09 16:28:24.532651 master-0 kubenswrapper[7604]: I0309 16:28:24.532597 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-xzwh9_457f42a7-f14c-4d61-a87a-bc1ed422feed/openshift-config-operator/1.log" Mar 09 16:28:24.564597 master-0 kubenswrapper[7604]: I0309 16:28:24.564539 7604 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="97231a996b3f971d2df45300f8add68d0e10efa9719fb86375b4c767d77ae7f2" exitCode=255 Mar 09 16:28:24.564713 master-0 kubenswrapper[7604]: I0309 16:28:24.564687 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerDied","Data":"97231a996b3f971d2df45300f8add68d0e10efa9719fb86375b4c767d77ae7f2"} Mar 09 16:28:24.564753 master-0 
kubenswrapper[7604]: I0309 16:28:24.564723 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerStarted","Data":"51cc97980a013ef784c30d027db741202e1e61692ca828907c9b9adb40652a56"} Mar 09 16:28:24.564753 master-0 kubenswrapper[7604]: I0309 16:28:24.564746 7604 scope.go:117] "RemoveContainer" containerID="08a50d026ef459ff3233ee74fc8df1d0208854ef10f3f9cdd3c02dba9aa4e4f2" Mar 09 16:28:24.565432 master-0 kubenswrapper[7604]: I0309 16:28:24.565359 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:28:24.574027 master-0 kubenswrapper[7604]: I0309 16:28:24.573951 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"f908a12ac71e2212454263bad6748c946abbe3337853638f948a9c8e648cf7ad"} Mar 09 16:28:24.584883 master-0 kubenswrapper[7604]: I0309 16:28:24.584826 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" event={"ID":"be86c85d-59b1-4279-8253-a998ca16cd4d","Type":"ContainerStarted","Data":"0289bb8850bfa0d4badae9ffbadd05333a42159b7ae31260554147ca4b1c8613"} Mar 09 16:28:24.585965 master-0 kubenswrapper[7604]: I0309 16:28:24.585844 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:28:24.594538 master-0 kubenswrapper[7604]: I0309 16:28:24.594407 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" 
event={"ID":"2e765395-7c6b-4cba-9a5a-37ba888722bb","Type":"ContainerStarted","Data":"1765d222fa51dc975cebdd1bdcaa4ce3c6b31334b8d1330af7de3940a2e5ca59"} Mar 09 16:28:24.597817 master-0 kubenswrapper[7604]: I0309 16:28:24.597763 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:28:24.601642 master-0 kubenswrapper[7604]: I0309 16:28:24.601390 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerStarted","Data":"278bf8d564b99ffbc1bdf2e9dfb1775c6df05e5a36ea800f1147b28f2ca15f64"} Mar 09 16:28:24.601642 master-0 kubenswrapper[7604]: I0309 16:28:24.601481 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerStarted","Data":"5d27613e5c07fed41355caf36a7da682d5655bd692c9fefa2418bf264de4dc45"} Mar 09 16:28:24.611569 master-0 kubenswrapper[7604]: I0309 16:28:24.608764 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n7slb" event={"ID":"ef122f26-bfae-44d2-a70a-8507b3b47332","Type":"ContainerStarted","Data":"44aeada53f1ce88ccbc8d0e871a8db6079ea36fb8294c26ac923aff0686187cd"} Mar 09 16:28:24.611569 master-0 kubenswrapper[7604]: I0309 16:28:24.610045 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" event={"ID":"72739f4d-da25-493b-91ef-d2b64e9297dd","Type":"ContainerStarted","Data":"9e01bf2cdccd00f5d0c7f8e7f1e547832c38486ebfbf1eb2482e4bd19bdf90d9"} Mar 09 16:28:24.614253 master-0 kubenswrapper[7604]: I0309 16:28:24.614206 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" 
event={"ID":"d15da434-241d-4a93-9ce3-f943d43bf2ce","Type":"ContainerStarted","Data":"131328b43b72ff6c1df11ce3ec4e469cbdaf3cf6fbeffac273e7f53e85c3be7d"} Mar 09 16:28:24.615506 master-0 kubenswrapper[7604]: I0309 16:28:24.615475 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:28:24.621219 master-0 kubenswrapper[7604]: I0309 16:28:24.620978 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:28:24.624002 master-0 kubenswrapper[7604]: I0309 16:28:24.623466 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" event={"ID":"f965b971-7e9a-4513-8450-b2b527609bd6","Type":"ContainerStarted","Data":"6d5f471d38ab26de2789bb7383ccfd1af1a0996fc7de4e1ac556541f152b9d74"} Mar 09 16:28:24.624002 master-0 kubenswrapper[7604]: I0309 16:28:24.623650 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:28:25.630263 master-0 kubenswrapper[7604]: I0309 16:28:25.630050 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" event={"ID":"72739f4d-da25-493b-91ef-d2b64e9297dd","Type":"ContainerStarted","Data":"433aaf0764cc4df536e818048a006c3de8a5c316b1cb969d24ac5fe651fdc642"} Mar 09 16:28:25.632092 master-0 kubenswrapper[7604]: I0309 16:28:25.632036 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n7slb" event={"ID":"ef122f26-bfae-44d2-a70a-8507b3b47332","Type":"ContainerStarted","Data":"538331a5ff1844b1e68747578b99a250283a532b392600070e8f9b22cc0cbe1f"} Mar 09 16:28:25.634060 master-0 kubenswrapper[7604]: I0309 16:28:25.634013 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-xzwh9_457f42a7-f14c-4d61-a87a-bc1ed422feed/openshift-config-operator/1.log" Mar 09 16:28:25.636395 master-0 kubenswrapper[7604]: I0309 16:28:25.636336 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"074be854d2394762790ecda23ce76fab41002fcee7566bde64a1163603c1915d"} Mar 09 16:28:25.640652 master-0 kubenswrapper[7604]: I0309 16:28:25.640597 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:28:28.215154 master-0 kubenswrapper[7604]: I0309 16:28:28.215068 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sj6x9"] Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: E0309 16:28:28.215297 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103a81df-6dfb-42d3-bc03-4391681c3e35" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215317 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="103a81df-6dfb-42d3-bc03-4391681c3e35" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: E0309 16:28:28.215337 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215345 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: E0309 16:28:28.215357 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215365 7604 
state_mem.go:107] "Deleted CPUSet assignment" podUID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: E0309 16:28:28.215375 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215382 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: E0309 16:28:28.215392 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215399 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215526 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215539 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215554 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215566 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerName="installer" Mar 09 16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.215577 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="103a81df-6dfb-42d3-bc03-4391681c3e35" containerName="installer" Mar 09 
16:28:28.217206 master-0 kubenswrapper[7604]: I0309 16:28:28.216124 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.218316 master-0 kubenswrapper[7604]: I0309 16:28:28.218293 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 09 16:28:28.218750 master-0 kubenswrapper[7604]: I0309 16:28:28.218736 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-fhhfg" Mar 09 16:28:28.218909 master-0 kubenswrapper[7604]: I0309 16:28:28.218750 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 09 16:28:28.218973 master-0 kubenswrapper[7604]: I0309 16:28:28.218739 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 09 16:28:28.220701 master-0 kubenswrapper[7604]: I0309 16:28:28.220660 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 09 16:28:28.222057 master-0 kubenswrapper[7604]: I0309 16:28:28.222025 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5587e967-124e-4f2a-b7fb-42cb16bfc337-config-volume\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.222141 master-0 kubenswrapper[7604]: I0309 16:28:28.222074 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.222216 master-0 kubenswrapper[7604]: I0309 16:28:28.222178 7604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzfq\" (UniqueName: \"kubernetes.io/projected/5587e967-124e-4f2a-b7fb-42cb16bfc337-kube-api-access-4dzfq\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.224682 master-0 kubenswrapper[7604]: I0309 16:28:28.224631 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sj6x9"] Mar 09 16:28:28.322747 master-0 kubenswrapper[7604]: I0309 16:28:28.322700 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzfq\" (UniqueName: \"kubernetes.io/projected/5587e967-124e-4f2a-b7fb-42cb16bfc337-kube-api-access-4dzfq\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.323070 master-0 kubenswrapper[7604]: I0309 16:28:28.323049 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5587e967-124e-4f2a-b7fb-42cb16bfc337-config-volume\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.323205 master-0 kubenswrapper[7604]: I0309 16:28:28.323188 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.323359 master-0 kubenswrapper[7604]: E0309 16:28:28.323320 7604 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 09 16:28:28.323460 master-0 kubenswrapper[7604]: E0309 16:28:28.323386 7604 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls podName:5587e967-124e-4f2a-b7fb-42cb16bfc337 nodeName:}" failed. No retries permitted until 2026-03-09 16:28:28.823367829 +0000 UTC m=+165.877337252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls") pod "dns-default-sj6x9" (UID: "5587e967-124e-4f2a-b7fb-42cb16bfc337") : secret "dns-default-metrics-tls" not found Mar 09 16:28:28.324252 master-0 kubenswrapper[7604]: I0309 16:28:28.324198 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5587e967-124e-4f2a-b7fb-42cb16bfc337-config-volume\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.339907 master-0 kubenswrapper[7604]: I0309 16:28:28.339862 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzfq\" (UniqueName: \"kubernetes.io/projected/5587e967-124e-4f2a-b7fb-42cb16bfc337-kube-api-access-4dzfq\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.424714 master-0 kubenswrapper[7604]: I0309 16:28:28.424646 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:28.425173 master-0 kubenswrapper[7604]: I0309 16:28:28.425135 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:28.722578 master-0 kubenswrapper[7604]: I0309 16:28:28.722519 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:28.722802 master-0 kubenswrapper[7604]: I0309 16:28:28.722591 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:28.827462 master-0 kubenswrapper[7604]: I0309 16:28:28.827354 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:28.827678 master-0 kubenswrapper[7604]: E0309 16:28:28.827558 7604 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 09 16:28:28.827678 master-0 kubenswrapper[7604]: E0309 16:28:28.827623 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls podName:5587e967-124e-4f2a-b7fb-42cb16bfc337 nodeName:}" failed. No retries permitted until 2026-03-09 16:28:29.827604976 +0000 UTC m=+166.881574399 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls") pod "dns-default-sj6x9" (UID: "5587e967-124e-4f2a-b7fb-42cb16bfc337") : secret "dns-default-metrics-tls" not found Mar 09 16:28:28.855734 master-0 kubenswrapper[7604]: I0309 16:28:28.855682 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zrqjw"] Mar 09 16:28:28.856775 master-0 kubenswrapper[7604]: I0309 16:28:28.856752 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:28.859128 master-0 kubenswrapper[7604]: I0309 16:28:28.859072 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-ccfvc" Mar 09 16:28:28.868581 master-0 kubenswrapper[7604]: I0309 16:28:28.868534 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrqjw"] Mar 09 16:28:28.931460 master-0 kubenswrapper[7604]: I0309 16:28:28.931383 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9cq\" (UniqueName: \"kubernetes.io/projected/aec186fc-aead-47fb-a7e1-8c9325897c47-kube-api-access-vj9cq\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:28.931460 master-0 kubenswrapper[7604]: I0309 16:28:28.931469 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-catalog-content\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:28.931711 master-0 kubenswrapper[7604]: I0309 16:28:28.931497 7604 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-utilities\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.032193 master-0 kubenswrapper[7604]: I0309 16:28:29.032045 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9cq\" (UniqueName: \"kubernetes.io/projected/aec186fc-aead-47fb-a7e1-8c9325897c47-kube-api-access-vj9cq\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.032193 master-0 kubenswrapper[7604]: I0309 16:28:29.032106 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-catalog-content\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.032193 master-0 kubenswrapper[7604]: I0309 16:28:29.032123 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-utilities\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.032873 master-0 kubenswrapper[7604]: I0309 16:28:29.032836 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-utilities\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.032982 master-0 
kubenswrapper[7604]: I0309 16:28:29.032950 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-catalog-content\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.044581 master-0 kubenswrapper[7604]: I0309 16:28:29.044520 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8gkw8"] Mar 09 16:28:29.045539 master-0 kubenswrapper[7604]: I0309 16:28:29.045510 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.047363 master-0 kubenswrapper[7604]: W0309 16:28:29.047313 7604 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-q2k6n": failed to list *v1.Secret: secrets "certified-operators-dockercfg-q2k6n" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'master-0' and this object Mar 09 16:28:29.047363 master-0 kubenswrapper[7604]: E0309 16:28:29.047360 7604 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-q2k6n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-q2k6n\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 09 16:28:29.054449 master-0 kubenswrapper[7604]: I0309 16:28:29.054388 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9cq\" (UniqueName: \"kubernetes.io/projected/aec186fc-aead-47fb-a7e1-8c9325897c47-kube-api-access-vj9cq\") pod 
\"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.062363 master-0 kubenswrapper[7604]: I0309 16:28:29.062316 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gkw8"] Mar 09 16:28:29.133750 master-0 kubenswrapper[7604]: I0309 16:28:29.133683 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-utilities\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.133750 master-0 kubenswrapper[7604]: I0309 16:28:29.133756 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-catalog-content\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.134046 master-0 kubenswrapper[7604]: I0309 16:28:29.133873 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgj24\" (UniqueName: \"kubernetes.io/projected/3745c679-2ea9-4382-9270-4d3fbbaaf296-kube-api-access-jgj24\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.175963 master-0 kubenswrapper[7604]: I0309 16:28:29.175883 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:29.235257 master-0 kubenswrapper[7604]: I0309 16:28:29.235184 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgj24\" (UniqueName: \"kubernetes.io/projected/3745c679-2ea9-4382-9270-4d3fbbaaf296-kube-api-access-jgj24\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.235773 master-0 kubenswrapper[7604]: I0309 16:28:29.235264 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-utilities\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.235773 master-0 kubenswrapper[7604]: I0309 16:28:29.235292 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-catalog-content\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.236677 master-0 kubenswrapper[7604]: I0309 16:28:29.235988 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-utilities\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.236677 master-0 kubenswrapper[7604]: I0309 16:28:29.236274 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-catalog-content\") pod \"certified-operators-8gkw8\" (UID: 
\"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.255237 master-0 kubenswrapper[7604]: I0309 16:28:29.255025 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgj24\" (UniqueName: \"kubernetes.io/projected/3745c679-2ea9-4382-9270-4d3fbbaaf296-kube-api-access-jgj24\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:29.565466 master-0 kubenswrapper[7604]: I0309 16:28:29.565406 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrqjw"] Mar 09 16:28:29.569126 master-0 kubenswrapper[7604]: W0309 16:28:29.568584 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec186fc_aead_47fb_a7e1_8c9325897c47.slice/crio-1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea WatchSource:0}: Error finding container 1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea: Status 404 returned error can't find the container with id 1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea Mar 09 16:28:29.657018 master-0 kubenswrapper[7604]: I0309 16:28:29.656977 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqjw" event={"ID":"aec186fc-aead-47fb-a7e1-8c9325897c47","Type":"ContainerStarted","Data":"1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea"} Mar 09 16:28:29.841024 master-0 kubenswrapper[7604]: I0309 16:28:29.840865 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:28:29.841224 master-0 
kubenswrapper[7604]: E0309 16:28:29.841054 7604 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 09 16:28:29.841224 master-0 kubenswrapper[7604]: E0309 16:28:29.841163 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls podName:5587e967-124e-4f2a-b7fb-42cb16bfc337 nodeName:}" failed. No retries permitted until 2026-03-09 16:28:31.8411315 +0000 UTC m=+168.895100923 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls") pod "dns-default-sj6x9" (UID: "5587e967-124e-4f2a-b7fb-42cb16bfc337") : secret "dns-default-metrics-tls" not found Mar 09 16:28:29.931793 master-0 kubenswrapper[7604]: I0309 16:28:29.931730 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-q2k6n" Mar 09 16:28:29.932800 master-0 kubenswrapper[7604]: I0309 16:28:29.932749 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:28:30.302235 master-0 kubenswrapper[7604]: I0309 16:28:30.302156 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gkw8"] Mar 09 16:28:30.310840 master-0 kubenswrapper[7604]: W0309 16:28:30.310769 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3745c679_2ea9_4382_9270_4d3fbbaaf296.slice/crio-87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d WatchSource:0}: Error finding container 87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d: Status 404 returned error can't find the container with id 87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d Mar 09 16:28:30.665218 master-0 kubenswrapper[7604]: I0309 16:28:30.665097 7604 generic.go:334] "Generic (PLEG): container finished" podID="3745c679-2ea9-4382-9270-4d3fbbaaf296" containerID="38bf4a179e73486d5ae4aba2338c68d5699149ac664abb92d0a252b9049f8f76" exitCode=0 Mar 09 16:28:30.665505 master-0 kubenswrapper[7604]: I0309 16:28:30.665294 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gkw8" event={"ID":"3745c679-2ea9-4382-9270-4d3fbbaaf296","Type":"ContainerDied","Data":"38bf4a179e73486d5ae4aba2338c68d5699149ac664abb92d0a252b9049f8f76"} Mar 09 16:28:30.665630 master-0 kubenswrapper[7604]: I0309 16:28:30.665604 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gkw8" event={"ID":"3745c679-2ea9-4382-9270-4d3fbbaaf296","Type":"ContainerStarted","Data":"87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d"} Mar 09 16:28:30.667289 master-0 kubenswrapper[7604]: I0309 16:28:30.667230 7604 generic.go:334] "Generic (PLEG): container finished" podID="aec186fc-aead-47fb-a7e1-8c9325897c47" 
containerID="076a8011760cf87704cccc794f400077e346aa9939d01683ec7b3535a6cd3a0f" exitCode=0 Mar 09 16:28:30.667289 master-0 kubenswrapper[7604]: I0309 16:28:30.667280 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqjw" event={"ID":"aec186fc-aead-47fb-a7e1-8c9325897c47","Type":"ContainerDied","Data":"076a8011760cf87704cccc794f400077e346aa9939d01683ec7b3535a6cd3a0f"} Mar 09 16:28:31.424919 master-0 kubenswrapper[7604]: I0309 16:28:31.424863 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:28:31.425514 master-0 kubenswrapper[7604]: I0309 16:28:31.424931 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:28:31.589450 master-0 kubenswrapper[7604]: I0309 16:28:31.586602 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"] Mar 09 16:28:31.589450 master-0 kubenswrapper[7604]: I0309 16:28:31.587350 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.589794 master-0 kubenswrapper[7604]: I0309 16:28:31.589512 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"] Mar 09 16:28:31.595405 master-0 kubenswrapper[7604]: I0309 16:28:31.590075 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" Mar 09 16:28:31.596024 master-0 kubenswrapper[7604]: I0309 16:28:31.595822 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kz284" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.596114 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.596261 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.596484 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.596630 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.596777 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.596912 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.598088 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"] Mar 09 16:28:31.600455 master-0 kubenswrapper[7604]: I0309 16:28:31.598749 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" Mar 09 16:28:31.604479 master-0 kubenswrapper[7604]: I0309 16:28:31.601503 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqzqk" Mar 09 16:28:31.604479 master-0 kubenswrapper[7604]: I0309 16:28:31.602897 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 09 16:28:31.604479 master-0 kubenswrapper[7604]: I0309 16:28:31.603061 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 09 16:28:31.604479 master-0 kubenswrapper[7604]: I0309 16:28:31.603213 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-n686v" Mar 09 16:28:31.604479 master-0 kubenswrapper[7604]: I0309 16:28:31.603433 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 09 16:28:31.614153 master-0 kubenswrapper[7604]: I0309 16:28:31.614083 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"] Mar 09 16:28:31.617448 master-0 kubenswrapper[7604]: I0309 16:28:31.614763 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:28:31.617448 master-0 kubenswrapper[7604]: I0309 16:28:31.616190 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-g4frj" Mar 09 16:28:31.617448 master-0 kubenswrapper[7604]: I0309 16:28:31.616400 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 09 16:28:31.625471 master-0 kubenswrapper[7604]: I0309 16:28:31.620086 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"] Mar 09 16:28:31.625471 master-0 kubenswrapper[7604]: I0309 16:28:31.623113 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"] Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.664591 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495rn\" (UniqueName: \"kubernetes.io/projected/357570a4-f69b-4970-9b6f-fe06fc4c2f90-kube-api-access-495rn\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.664663 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/812b321a-943d-4716-a17c-7f805333ef42-machine-approver-tls\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 
16:28:31.664737 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpfl\" (UniqueName: \"kubernetes.io/projected/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-kube-api-access-shpfl\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.664772 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.664906 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.665017 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-config\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.665048 7604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q455j\" (UniqueName: \"kubernetes.io/projected/812b321a-943d-4716-a17c-7f805333ef42-kube-api-access-q455j\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.665072 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.665111 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdb8\" (UniqueName: \"kubernetes.io/projected/34c0b60e-da69-452d-858d-0af77f18946d-kube-api-access-vmdb8\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" Mar 09 16:28:31.667846 master-0 kubenswrapper[7604]: I0309 16:28:31.665159 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-auth-proxy-config\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.691971 master-0 kubenswrapper[7604]: I0309 16:28:31.678714 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"] Mar 09 16:28:31.710943 master-0 kubenswrapper[7604]: I0309 16:28:31.706166 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"] Mar 09 16:28:31.733853 master-0 kubenswrapper[7604]: I0309 16:28:31.731665 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-5fjz8"] Mar 09 16:28:31.733853 master-0 kubenswrapper[7604]: I0309 16:28:31.732167 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.735143 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.740392 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.740632 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.740785 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.740988 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-gpmvf" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.741285 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-2l9mk" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 
16:28:31.742371 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.742533 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.742703 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 09 16:28:31.743939 master-0 kubenswrapper[7604]: I0309 16:28:31.743314 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 09 16:28:31.753899 master-0 kubenswrapper[7604]: I0309 16:28:31.745032 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 09 16:28:31.753899 master-0 kubenswrapper[7604]: I0309 16:28:31.748545 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"] Mar 09 16:28:31.753899 master-0 kubenswrapper[7604]: I0309 16:28:31.749743 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" Mar 09 16:28:31.753899 master-0 kubenswrapper[7604]: I0309 16:28:31.751798 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.754875 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-gkx8f" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.754998 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.755151 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.755361 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.755498 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.756880 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 09 16:28:31.760760 master-0 kubenswrapper[7604]: I0309 16:28:31.759671 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765579 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-config\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765615 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6b4992e-50f3-473c-aa83-ed35569ba307-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765632 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d65ba99c-ecce-4678-a7dd-457638fb2829-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765647 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8be2517a-6f28-4289-a108-6e3345a1e246-snapshots\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765668 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q455j\" (UniqueName: \"kubernetes.io/projected/812b321a-943d-4716-a17c-7f805333ef42-kube-api-access-q455j\") pod 
\"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765687 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765706 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765723 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765745 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdb8\" (UniqueName: \"kubernetes.io/projected/34c0b60e-da69-452d-858d-0af77f18946d-kube-api-access-vmdb8\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" 
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765763 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-auth-proxy-config\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765783 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9fx\" (UniqueName: \"kubernetes.io/projected/8be2517a-6f28-4289-a108-6e3345a1e246-kube-api-access-hh9fx\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765808 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495rn\" (UniqueName: \"kubernetes.io/projected/357570a4-f69b-4970-9b6f-fe06fc4c2f90-kube-api-access-495rn\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765822 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/812b321a-943d-4716-a17c-7f805333ef42-machine-approver-tls\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765859 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpfl\" 
(UniqueName: \"kubernetes.io/projected/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-kube-api-access-shpfl\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765876 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765894 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzzg\" (UniqueName: \"kubernetes.io/projected/d6b4992e-50f3-473c-aa83-ed35569ba307-kube-api-access-bhzzg\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765910 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765927 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765953 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p62v\" (UniqueName: \"kubernetes.io/projected/d65ba99c-ecce-4678-a7dd-457638fb2829-kube-api-access-5p62v\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.765998 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.766016 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.766032 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d65ba99c-ecce-4678-a7dd-457638fb2829-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.766051 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.766069 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.769025 master-0 kubenswrapper[7604]: I0309 16:28:31.766834 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-config\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"
Mar 09 16:28:31.769882 master-0 kubenswrapper[7604]: I0309 16:28:31.769222 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-auth-proxy-config\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"
Mar 09 16:28:31.786474 master-0 kubenswrapper[7604]: I0309 16:28:31.777912 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"
Mar 09 16:28:31.786474 master-0 kubenswrapper[7604]: I0309 16:28:31.778247 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"
Mar 09 16:28:31.786474 master-0 kubenswrapper[7604]: I0309 16:28:31.778316 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"]
Mar 09 16:28:31.786474 master-0 kubenswrapper[7604]: I0309 16:28:31.781995 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/812b321a-943d-4716-a17c-7f805333ef42-machine-approver-tls\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"
Mar 09 16:28:31.800455 master-0 kubenswrapper[7604]: I0309 16:28:31.787567 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-5fjz8"]
Mar 09 16:28:31.800455 master-0 kubenswrapper[7604]: I0309 16:28:31.793952 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:28:31.830863 master-0 kubenswrapper[7604]: I0309 16:28:31.829598 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpfl\" (UniqueName: \"kubernetes.io/projected/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-kube-api-access-shpfl\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"
Mar 09 16:28:31.830863 master-0 kubenswrapper[7604]: I0309 16:28:31.830693 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdb8\" (UniqueName: \"kubernetes.io/projected/34c0b60e-da69-452d-858d-0af77f18946d-kube-api-access-vmdb8\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"
Mar 09 16:28:31.840790 master-0 kubenswrapper[7604]: I0309 16:28:31.833091 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q455j\" (UniqueName: \"kubernetes.io/projected/812b321a-943d-4716-a17c-7f805333ef42-kube-api-access-q455j\") pod \"machine-approver-955fcfb87-xdqlp\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"
Mar 09 16:28:31.850452 master-0 kubenswrapper[7604]: I0309 16:28:31.843956 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495rn\" (UniqueName: \"kubernetes.io/projected/357570a4-f69b-4970-9b6f-fe06fc4c2f90-kube-api-access-495rn\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867645 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9fx\" (UniqueName: \"kubernetes.io/projected/8be2517a-6f28-4289-a108-6e3345a1e246-kube-api-access-hh9fx\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867722 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867781 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzzg\" (UniqueName: \"kubernetes.io/projected/d6b4992e-50f3-473c-aa83-ed35569ba307-kube-api-access-bhzzg\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867804 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867835 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p62v\" (UniqueName: \"kubernetes.io/projected/d65ba99c-ecce-4678-a7dd-457638fb2829-kube-api-access-5p62v\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867866 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867893 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.867919 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d65ba99c-ecce-4678-a7dd-457638fb2829-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869335 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869412 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869459 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6b4992e-50f3-473c-aa83-ed35569ba307-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869482 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d65ba99c-ecce-4678-a7dd-457638fb2829-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869504 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8be2517a-6f28-4289-a108-6e3345a1e246-snapshots\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869538 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.869566 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.870348 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.870498 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d65ba99c-ecce-4678-a7dd-457638fb2829-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.870606 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.872082 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.873464 master-0 kubenswrapper[7604]: I0309 16:28:31.872102 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.876700 master-0 kubenswrapper[7604]: I0309 16:28:31.872825 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.878210 master-0 kubenswrapper[7604]: I0309 16:28:31.878036 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8be2517a-6f28-4289-a108-6e3345a1e246-snapshots\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.878210 master-0 kubenswrapper[7604]: I0309 16:28:31.878107 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d65ba99c-ecce-4678-a7dd-457638fb2829-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.878605 master-0 kubenswrapper[7604]: I0309 16:28:31.878342 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.878940 master-0 kubenswrapper[7604]: E0309 16:28:31.878765 7604 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Mar 09 16:28:31.878940 master-0 kubenswrapper[7604]: E0309 16:28:31.878858 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls podName:5587e967-124e-4f2a-b7fb-42cb16bfc337 nodeName:}" failed. No retries permitted until 2026-03-09 16:28:35.87883219 +0000 UTC m=+172.932801693 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls") pod "dns-default-sj6x9" (UID: "5587e967-124e-4f2a-b7fb-42cb16bfc337") : secret "dns-default-metrics-tls" not found
Mar 09 16:28:31.884839 master-0 kubenswrapper[7604]: I0309 16:28:31.884806 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6b4992e-50f3-473c-aa83-ed35569ba307-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.887311 master-0 kubenswrapper[7604]: I0309 16:28:31.887254 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.891819 master-0 kubenswrapper[7604]: I0309 16:28:31.889638 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p62v\" (UniqueName: \"kubernetes.io/projected/d65ba99c-ecce-4678-a7dd-457638fb2829-kube-api-access-5p62v\") pod \"cluster-cloud-controller-manager-operator-559568b945-59pzq\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.894052 master-0 kubenswrapper[7604]: I0309 16:28:31.894003 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzzg\" (UniqueName: \"kubernetes.io/projected/d6b4992e-50f3-473c-aa83-ed35569ba307-kube-api-access-bhzzg\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:31.894127 master-0 kubenswrapper[7604]: I0309 16:28:31.894092 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9fx\" (UniqueName: \"kubernetes.io/projected/8be2517a-6f28-4289-a108-6e3345a1e246-kube-api-access-hh9fx\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:31.906262 master-0 kubenswrapper[7604]: I0309 16:28:31.905785 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"
Mar 09 16:28:31.954461 master-0 kubenswrapper[7604]: I0309 16:28:31.954084 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"
Mar 09 16:28:31.980195 master-0 kubenswrapper[7604]: I0309 16:28:31.980104 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:28:32.040459 master-0 kubenswrapper[7604]: I0309 16:28:32.040219 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"
Mar 09 16:28:32.070768 master-0 kubenswrapper[7604]: I0309 16:28:32.070716 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"
Mar 09 16:28:32.161152 master-0 kubenswrapper[7604]: I0309 16:28:32.152985 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:28:32.185136 master-0 kubenswrapper[7604]: I0309 16:28:32.183354 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:28:32.476031 master-0 kubenswrapper[7604]: I0309 16:28:32.475979 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"]
Mar 09 16:28:32.721844 master-0 kubenswrapper[7604]: I0309 16:28:32.721765 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerStarted","Data":"0161f699c4304c5986c2b7c9bed720ea6736224b8c4d779a21133488f92f2331"}
Mar 09 16:28:32.723337 master-0 kubenswrapper[7604]: I0309 16:28:32.723286 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" event={"ID":"357570a4-f69b-4970-9b6f-fe06fc4c2f90","Type":"ContainerStarted","Data":"a9b628cdb80b26fca66723feadbd65d1a0479ac8b305d4bb2d0a1150e9146e96"}
Mar 09 16:28:32.726117 master-0 kubenswrapper[7604]: I0309 16:28:32.726075 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" event={"ID":"812b321a-943d-4716-a17c-7f805333ef42","Type":"ContainerStarted","Data":"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836"}
Mar 09 16:28:32.726192 master-0 kubenswrapper[7604]: I0309 16:28:32.726128 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" event={"ID":"812b321a-943d-4716-a17c-7f805333ef42","Type":"ContainerStarted","Data":"985254158c4efd99b6461a9bedfc34bb053a6c3692cd6787bbf20cc60ab472ae"}
Mar 09 16:28:32.765098 master-0 kubenswrapper[7604]: I0309 16:28:32.764960 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-5fjz8"]
Mar 09 16:28:32.784580 master-0 kubenswrapper[7604]: W0309 16:28:32.784374 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be2517a_6f28_4289_a108_6e3345a1e246.slice/crio-550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e WatchSource:0}: Error finding container 550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e: Status 404 returned error can't find the container with id 550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e
Mar 09 16:28:32.826330 master-0 kubenswrapper[7604]: I0309 16:28:32.823708 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"]
Mar 09 16:28:32.834674 master-0 kubenswrapper[7604]: I0309 16:28:32.834626 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"]
Mar 09 16:28:32.950616 master-0 kubenswrapper[7604]: I0309 16:28:32.950560 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"]
Mar 09 16:28:33.764220 master-0 kubenswrapper[7604]: I0309 16:28:33.756274 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" event={"ID":"631f2bdf-2ed4-4315-98c3-c5a538d0aec3","Type":"ContainerStarted","Data":"5534d85f0a9fe740eb26ccac2e47ce52d44e3f557fa5be108af8630168b4e7ab"}
Mar 09 16:28:33.764220 master-0 kubenswrapper[7604]: I0309 16:28:33.758743 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" event={"ID":"34c0b60e-da69-452d-858d-0af77f18946d","Type":"ContainerStarted","Data":"b0a3a4ee0305c897e72b7253be6cebaee1b1c6c54eed95437052e11964c648c2"}
Mar 09 16:28:33.764220 master-0 kubenswrapper[7604]: I0309 16:28:33.760535 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" event={"ID":"8be2517a-6f28-4289-a108-6e3345a1e246","Type":"ContainerStarted","Data":"550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e"}
Mar 09 16:28:33.799127 master-0 kubenswrapper[7604]: I0309 16:28:33.797068 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" event={"ID":"d6b4992e-50f3-473c-aa83-ed35569ba307","Type":"ContainerStarted","Data":"74df2bec8b010f6db92c49c25a8517473301dd2f91a198e4528489111ed859cc"}
Mar 09 16:28:33.799127 master-0 kubenswrapper[7604]: I0309 16:28:33.797117 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" event={"ID":"d6b4992e-50f3-473c-aa83-ed35569ba307","Type":"ContainerStarted","Data":"81a061ad8b3b8276fdddd4547781d1739b9b814b6efb0c8aa846322d762aeea4"}
Mar 09 16:28:33.799127 master-0 kubenswrapper[7604]: I0309 16:28:33.797129 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" event={"ID":"d6b4992e-50f3-473c-aa83-ed35569ba307","Type":"ContainerStarted","Data":"29f3efce623abd11180f220d3e9cf221f9f6cf57527de2211126a65b38f4186b"}
Mar 09 16:28:33.829442 master-0 kubenswrapper[7604]: I0309 16:28:33.818599 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"]
Mar 09 16:28:33.829442 master-0 kubenswrapper[7604]: I0309 16:28:33.819755 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:28:33.829442 master-0 kubenswrapper[7604]: I0309 16:28:33.821412 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"]
Mar 09 16:28:33.829442 master-0 kubenswrapper[7604]: I0309 16:28:33.822006 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-hgcd7"
Mar 09 16:28:33.829442 master-0 kubenswrapper[7604]: I0309 16:28:33.823742 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 09 16:28:33.829442 master-0 kubenswrapper[7604]: I0309 16:28:33.824199 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.844550 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"]
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.845737 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.850518 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.850591 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.850758 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-jfns5"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.851579 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.852460 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 09 16:28:33.854023 master-0 kubenswrapper[7604]: I0309 16:28:33.852755 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" podStartSLOduration=2.852624859 podStartE2EDuration="2.852624859s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:28:33.832632041 +0000 UTC m=+170.886601474" watchObservedRunningTime="2026-03-09 16:28:33.852624859 +0000 UTC m=+170.906594302"
Mar 09 16:28:33.873235 master-0 kubenswrapper[7604]: I0309 16:28:33.873177 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"]
Mar 09 16:28:33.928549 master-0 kubenswrapper[7604]: I0309 16:28:33.928500 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"]
Mar 09 16:28:33.929726 master-0 kubenswrapper[7604]: I0309 16:28:33.929703 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:33.943229 master-0 kubenswrapper[7604]: I0309 16:28:33.943178 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:28:33.943229 master-0 kubenswrapper[7604]: I0309 16:28:33.943234 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:28:33.943229 master-0 kubenswrapper[7604]: I0309 16:28:33.943258 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkxg\" (UniqueName: \"kubernetes.io/projected/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-kube-api-access-4gkxg\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:28:33.943578 master-0 kubenswrapper[7604]: I0309 16:28:33.943288 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:33.943578 master-0 kubenswrapper[7604]: I0309 16:28:33.943318 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-868cs\" (UniqueName: \"kubernetes.io/projected/a320d845-3a5d-4027-a765-f0b2dc07f9de-kube-api-access-868cs\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:33.943578 master-0 kubenswrapper[7604]: I0309 16:28:33.943383 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:33.943578 master-0 kubenswrapper[7604]: I0309 16:28:33.943380 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-8bw78"
Mar 09 16:28:33.943578 master-0 kubenswrapper[7604]: I0309 16:28:33.943493 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 09 16:28:33.945169 master-0 kubenswrapper[7604]: I0309 16:28:33.943869 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 09 16:28:33.945169 master-0 kubenswrapper[7604]: I0309 16:28:33.943994 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 09 16:28:33.958722 master-0 kubenswrapper[7604]: I0309 16:28:33.957156 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"]
Mar 09 16:28:34.049968 master-0 kubenswrapper[7604]: I0309 16:28:34.049853 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcvbf\" (UniqueName: \"kubernetes.io/projected/a6cd9347-eec9-4549-9de4-6033112634ce-kube-api-access-lcvbf\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.050151 master-0 kubenswrapper[7604]: I0309 16:28:34.049917 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-868cs\" (UniqueName: \"kubernetes.io/projected/a320d845-3a5d-4027-a765-f0b2dc07f9de-kube-api-access-868cs\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:34.050151 master-0 kubenswrapper[7604]: I0309 16:28:34.050062 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:34.050151 master-0 kubenswrapper[7604]: I0309 16:28:34.050083 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.050151 master-0 kubenswrapper[7604]: I0309 16:28:34.050151 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.050365 master-0 kubenswrapper[7604]: I0309 16:28:34.050225 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:28:34.050365 master-0 kubenswrapper[7604]: I0309 16:28:34.050243 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.050365 master-0 kubenswrapper[7604]: I0309 16:28:34.050269 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:28:34.050365 master-0 kubenswrapper[7604]: I0309 16:28:34.050306 7604 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkxg\" (UniqueName: \"kubernetes.io/projected/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-kube-api-access-4gkxg\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:28:34.050365 master-0 kubenswrapper[7604]: I0309 16:28:34.050337 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:28:34.050602 master-0 kubenswrapper[7604]: E0309 16:28:34.050517 7604 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 09 16:28:34.050602 master-0 kubenswrapper[7604]: E0309 16:28:34.050579 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert podName:a320d845-3a5d-4027-a765-f0b2dc07f9de nodeName:}" failed. No retries permitted until 2026-03-09 16:28:34.550558028 +0000 UTC m=+171.604527451 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-6zcn7" (UID: "a320d845-3a5d-4027-a765-f0b2dc07f9de") : secret "cloud-credential-operator-serving-cert" not found Mar 09 16:28:34.056448 master-0 kubenswrapper[7604]: I0309 16:28:34.056048 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"] Mar 09 16:28:34.056720 master-0 kubenswrapper[7604]: I0309 16:28:34.056692 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:28:34.063021 master-0 kubenswrapper[7604]: I0309 16:28:34.062966 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.064942 master-0 kubenswrapper[7604]: I0309 16:28:34.064762 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:28:34.065952 master-0 kubenswrapper[7604]: I0309 16:28:34.065928 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 09 16:28:34.068266 master-0 kubenswrapper[7604]: I0309 16:28:34.067635 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-wsmcd" Mar 09 16:28:34.074139 master-0 kubenswrapper[7604]: I0309 16:28:34.072293 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:28:34.081535 master-0 kubenswrapper[7604]: I0309 16:28:34.079111 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkxg\" (UniqueName: \"kubernetes.io/projected/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-kube-api-access-4gkxg\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:28:34.081535 master-0 kubenswrapper[7604]: I0309 16:28:34.080803 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"] Mar 09 16:28:34.090791 master-0 kubenswrapper[7604]: I0309 16:28:34.089721 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-868cs\" (UniqueName: \"kubernetes.io/projected/a320d845-3a5d-4027-a765-f0b2dc07f9de-kube-api-access-868cs\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:28:34.152091 master-0 kubenswrapper[7604]: I0309 16:28:34.152017 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.152091 master-0 kubenswrapper[7604]: I0309 16:28:34.152101 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.152576 master-0 kubenswrapper[7604]: I0309 16:28:34.152132 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.152576 master-0 kubenswrapper[7604]: I0309 16:28:34.152153 7604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2qw\" (UniqueName: \"kubernetes.io/projected/8796f37c-d1ec-469d-90df-e007bf620e8c-kube-api-access-6n2qw\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.152576 master-0 kubenswrapper[7604]: I0309 16:28:34.152178 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.152576 master-0 kubenswrapper[7604]: I0309 16:28:34.152200 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8796f37c-d1ec-469d-90df-e007bf620e8c-tmpfs\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.152576 master-0 kubenswrapper[7604]: I0309 16:28:34.152297 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.152576 master-0 kubenswrapper[7604]: I0309 16:28:34.152322 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvbf\" (UniqueName: \"kubernetes.io/projected/a6cd9347-eec9-4549-9de4-6033112634ce-kube-api-access-lcvbf\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: 
\"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.153361 master-0 kubenswrapper[7604]: E0309 16:28:34.153184 7604 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 09 16:28:34.153361 master-0 kubenswrapper[7604]: E0309 16:28:34.153259 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls podName:a6cd9347-eec9-4549-9de4-6033112634ce nodeName:}" failed. No retries permitted until 2026-03-09 16:28:34.65323605 +0000 UTC m=+171.707205473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-4qg6v" (UID: "a6cd9347-eec9-4549-9de4-6033112634ce") : secret "machine-api-operator-tls" not found Mar 09 16:28:34.154489 master-0 kubenswrapper[7604]: I0309 16:28:34.154454 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.154801 master-0 kubenswrapper[7604]: I0309 16:28:34.154776 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:28:34.155483 master-0 kubenswrapper[7604]: I0309 16:28:34.155407 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.195813 master-0 kubenswrapper[7604]: I0309 16:28:34.194481 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcvbf\" (UniqueName: \"kubernetes.io/projected/a6cd9347-eec9-4549-9de4-6033112634ce-kube-api-access-lcvbf\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:28:34.255196 master-0 kubenswrapper[7604]: I0309 16:28:34.253715 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.255196 master-0 kubenswrapper[7604]: I0309 16:28:34.253935 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.255196 master-0 kubenswrapper[7604]: I0309 16:28:34.254021 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2qw\" (UniqueName: 
\"kubernetes.io/projected/8796f37c-d1ec-469d-90df-e007bf620e8c-kube-api-access-6n2qw\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.255196 master-0 kubenswrapper[7604]: I0309 16:28:34.254051 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8796f37c-d1ec-469d-90df-e007bf620e8c-tmpfs\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.265104 master-0 kubenswrapper[7604]: I0309 16:28:34.263728 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kqtzc"] Mar 09 16:28:34.265104 master-0 kubenswrapper[7604]: I0309 16:28:34.264492 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.265104 master-0 kubenswrapper[7604]: I0309 16:28:34.264513 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.265594 master-0 kubenswrapper[7604]: I0309 16:28:34.265515 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8796f37c-d1ec-469d-90df-e007bf620e8c-tmpfs\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.272934 master-0 kubenswrapper[7604]: I0309 16:28:34.270351 7604 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-chm9n" Mar 09 16:28:34.290707 master-0 kubenswrapper[7604]: I0309 16:28:34.287901 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.323830 master-0 kubenswrapper[7604]: I0309 16:28:34.323768 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dfgzl"] Mar 09 16:28:34.326186 master-0 kubenswrapper[7604]: I0309 16:28:34.326159 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.336239 master-0 kubenswrapper[7604]: I0309 16:28:34.333927 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2qw\" (UniqueName: \"kubernetes.io/projected/8796f37c-d1ec-469d-90df-e007bf620e8c-kube-api-access-6n2qw\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.341961 master-0 kubenswrapper[7604]: I0309 16:28:34.339698 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfgzl"] Mar 09 16:28:34.358512 master-0 kubenswrapper[7604]: I0309 16:28:34.358118 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8hj\" (UniqueName: \"kubernetes.io/projected/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-kube-api-access-wn8hj\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.358512 master-0 kubenswrapper[7604]: I0309 16:28:34.358260 7604 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-hosts-file\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.363886 master-0 kubenswrapper[7604]: I0309 16:28:34.363845 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x9sm5" Mar 09 16:28:34.427936 master-0 kubenswrapper[7604]: I0309 16:28:34.427871 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:28:34.438037 master-0 kubenswrapper[7604]: I0309 16:28:34.434587 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:28:34.463972 master-0 kubenswrapper[7604]: I0309 16:28:34.461675 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8hj\" (UniqueName: \"kubernetes.io/projected/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-kube-api-access-wn8hj\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.463972 master-0 kubenswrapper[7604]: I0309 16:28:34.461758 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcw7\" (UniqueName: \"kubernetes.io/projected/8946985d-17c1-4617-a678-4d57188fd5e5-kube-api-access-xkcw7\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.463972 master-0 kubenswrapper[7604]: I0309 16:28:34.461793 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-utilities\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.463972 master-0 kubenswrapper[7604]: I0309 16:28:34.461854 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-hosts-file\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.463972 master-0 kubenswrapper[7604]: I0309 16:28:34.461884 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-catalog-content\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.464983 master-0 kubenswrapper[7604]: I0309 16:28:34.464914 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-hosts-file\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.489469 master-0 kubenswrapper[7604]: I0309 16:28:34.488662 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxm65"] Mar 09 16:28:34.492144 master-0 kubenswrapper[7604]: I0309 16:28:34.490059 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxm65" Mar 09 16:28:34.498345 master-0 kubenswrapper[7604]: I0309 16:28:34.492311 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-db2vj" Mar 09 16:28:34.498400 master-0 kubenswrapper[7604]: I0309 16:28:34.498332 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8hj\" (UniqueName: \"kubernetes.io/projected/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-kube-api-access-wn8hj\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:28:34.509451 master-0 kubenswrapper[7604]: I0309 16:28:34.508223 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxm65"] Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.565865 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whs9z\" (UniqueName: \"kubernetes.io/projected/e25fcaed-de1e-40d3-8163-61d5d0057bb8-kube-api-access-whs9z\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.565959 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkcw7\" (UniqueName: \"kubernetes.io/projected/8946985d-17c1-4617-a678-4d57188fd5e5-kube-api-access-xkcw7\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.566199 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-utilities\") pod \"redhat-marketplace-dfgzl\" 
(UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.566259 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-utilities\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.566600 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.566706 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-utilities\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.566713 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-catalog-content\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.566784 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-catalog-content\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.570480 master-0 kubenswrapper[7604]: I0309 16:28:34.567380 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-catalog-content\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.583497 master-0 kubenswrapper[7604]: I0309 16:28:34.583410 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:28:34.584322 master-0 kubenswrapper[7604]: I0309 16:28:34.584272 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkcw7\" (UniqueName: \"kubernetes.io/projected/8946985d-17c1-4617-a678-4d57188fd5e5-kube-api-access-xkcw7\") pod \"redhat-marketplace-dfgzl\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:34.619094 master-0 kubenswrapper[7604]: I0309 16:28:34.619030 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kqtzc"
Mar 09 16:28:34.660047 master-0 kubenswrapper[7604]: I0309 16:28:34.659988 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"]
Mar 09 16:28:34.667618 master-0 kubenswrapper[7604]: I0309 16:28:34.667554 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-catalog-content\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.667797 master-0 kubenswrapper[7604]: I0309 16:28:34.667659 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.667797 master-0 kubenswrapper[7604]: I0309 16:28:34.667710 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whs9z\" (UniqueName: \"kubernetes.io/projected/e25fcaed-de1e-40d3-8163-61d5d0057bb8-kube-api-access-whs9z\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.668340 master-0 kubenswrapper[7604]: I0309 16:28:34.668170 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-catalog-content\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.668340 master-0 kubenswrapper[7604]: I0309 16:28:34.668250 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-utilities\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.668727 master-0 kubenswrapper[7604]: I0309 16:28:34.668692 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-utilities\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.683157 master-0 kubenswrapper[7604]: I0309 16:28:34.683102 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.690314 master-0 kubenswrapper[7604]: I0309 16:28:34.689721 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfgzl"
Mar 09 16:28:34.693261 master-0 kubenswrapper[7604]: I0309 16:28:34.693142 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whs9z\" (UniqueName: \"kubernetes.io/projected/e25fcaed-de1e-40d3-8163-61d5d0057bb8-kube-api-access-whs9z\") pod \"redhat-operators-wxm65\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.815521 master-0 kubenswrapper[7604]: I0309 16:28:34.815448 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxm65"
Mar 09 16:28:34.844571 master-0 kubenswrapper[7604]: I0309 16:28:34.844451 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:28:34.886738 master-0 kubenswrapper[7604]: I0309 16:28:34.886698 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:28:34.960090 master-0 kubenswrapper[7604]: I0309 16:28:34.960040 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"]
Mar 09 16:28:35.887018 master-0 kubenswrapper[7604]: I0309 16:28:35.886966 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:28:35.890370 master-0 kubenswrapper[7604]: I0309 16:28:35.890307 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:28:35.926652 master-0 kubenswrapper[7604]: W0309 16:28:35.926595 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8796f37c_d1ec_469d_90df_e007bf620e8c.slice/crio-4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703 WatchSource:0}: Error finding container 4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703: Status 404 returned error can't find the container with id 4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703
Mar 09 16:28:35.931882 master-0 kubenswrapper[7604]: W0309 16:28:35.931703 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d1829b3_643f_4f79_b6de_ae6ca5e78d4a.slice/crio-66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98 WatchSource:0}: Error finding container 66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98: Status 404 returned error can't find the container with id 66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98
Mar 09 16:28:36.035994 master-0 kubenswrapper[7604]: I0309 16:28:36.035936 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:28:36.817240 master-0 kubenswrapper[7604]: I0309 16:28:36.817177 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" event={"ID":"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a","Type":"ContainerStarted","Data":"66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98"}
Mar 09 16:28:36.818602 master-0 kubenswrapper[7604]: I0309 16:28:36.818545 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" event={"ID":"8796f37c-d1ec-469d-90df-e007bf620e8c","Type":"ContainerStarted","Data":"4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703"}
Mar 09 16:28:38.466070 master-0 kubenswrapper[7604]: I0309 16:28:38.465646 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfgzl"]
Mar 09 16:28:38.477626 master-0 kubenswrapper[7604]: I0309 16:28:38.477580 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-94s4v"]
Mar 09 16:28:38.479001 master-0 kubenswrapper[7604]: I0309 16:28:38.478756 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.489220 master-0 kubenswrapper[7604]: I0309 16:28:38.489176 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-5rw6v"
Mar 09 16:28:38.489459 master-0 kubenswrapper[7604]: I0309 16:28:38.489436 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 09 16:28:38.538348 master-0 kubenswrapper[7604]: I0309 16:28:38.538142 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/baf704e3-daf2-4934-a04e-d31df8df0c4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.538694 master-0 kubenswrapper[7604]: I0309 16:28:38.538383 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/baf704e3-daf2-4934-a04e-d31df8df0c4a-rootfs\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.538694 master-0 kubenswrapper[7604]: I0309 16:28:38.538413 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.538694 master-0 kubenswrapper[7604]: I0309 16:28:38.538473 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhglf\" (UniqueName: \"kubernetes.io/projected/baf704e3-daf2-4934-a04e-d31df8df0c4a-kube-api-access-nhglf\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.641137 master-0 kubenswrapper[7604]: I0309 16:28:38.640462 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.641137 master-0 kubenswrapper[7604]: I0309 16:28:38.640545 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhglf\" (UniqueName: \"kubernetes.io/projected/baf704e3-daf2-4934-a04e-d31df8df0c4a-kube-api-access-nhglf\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.641137 master-0 kubenswrapper[7604]: I0309 16:28:38.640594 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/baf704e3-daf2-4934-a04e-d31df8df0c4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.641137 master-0 kubenswrapper[7604]: I0309 16:28:38.640642 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/baf704e3-daf2-4934-a04e-d31df8df0c4a-rootfs\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.641137 master-0 kubenswrapper[7604]: I0309 16:28:38.640728 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/baf704e3-daf2-4934-a04e-d31df8df0c4a-rootfs\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.642516 master-0 kubenswrapper[7604]: I0309 16:28:38.642015 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/baf704e3-daf2-4934-a04e-d31df8df0c4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.643958 master-0 kubenswrapper[7604]: I0309 16:28:38.643922 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.681501 master-0 kubenswrapper[7604]: I0309 16:28:38.679454 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhglf\" (UniqueName: \"kubernetes.io/projected/baf704e3-daf2-4934-a04e-d31df8df0c4a-kube-api-access-nhglf\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.802510 master-0 kubenswrapper[7604]: I0309 16:28:38.801577 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:28:38.859368 master-0 kubenswrapper[7604]: I0309 16:28:38.859317 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrg"]
Mar 09 16:28:38.866691 master-0 kubenswrapper[7604]: I0309 16:28:38.866600 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:38.867451 master-0 kubenswrapper[7604]: I0309 16:28:38.867040 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrg"]
Mar 09 16:28:38.949617 master-0 kubenswrapper[7604]: I0309 16:28:38.945211 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98bk\" (UniqueName: \"kubernetes.io/projected/be856881-2ceb-4803-8330-4a27ad8b1937-kube-api-access-v98bk\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:38.949617 master-0 kubenswrapper[7604]: I0309 16:28:38.945405 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-catalog-content\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:38.949617 master-0 kubenswrapper[7604]: I0309 16:28:38.945529 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-utilities\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.047139 master-0 kubenswrapper[7604]: I0309 16:28:39.047068 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-catalog-content\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.047346 master-0 kubenswrapper[7604]: I0309 16:28:39.047299 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-utilities\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.047401 master-0 kubenswrapper[7604]: I0309 16:28:39.047371 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98bk\" (UniqueName: \"kubernetes.io/projected/be856881-2ceb-4803-8330-4a27ad8b1937-kube-api-access-v98bk\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.048304 master-0 kubenswrapper[7604]: I0309 16:28:39.047879 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-utilities\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.048304 master-0 kubenswrapper[7604]: I0309 16:28:39.048093 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-catalog-content\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.066784 master-0 kubenswrapper[7604]: I0309 16:28:39.065638 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98bk\" (UniqueName: \"kubernetes.io/projected/be856881-2ceb-4803-8330-4a27ad8b1937-kube-api-access-v98bk\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.193290 master-0 kubenswrapper[7604]: I0309 16:28:39.193225 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:28:39.249310 master-0 kubenswrapper[7604]: I0309 16:28:39.249185 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxm65"]
Mar 09 16:28:39.465095 master-0 kubenswrapper[7604]: I0309 16:28:39.465012 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49bwx"]
Mar 09 16:28:39.466871 master-0 kubenswrapper[7604]: I0309 16:28:39.466808 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.487443 master-0 kubenswrapper[7604]: I0309 16:28:39.487287 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49bwx"]
Mar 09 16:28:39.561232 master-0 kubenswrapper[7604]: I0309 16:28:39.561140 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-utilities\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.561466 master-0 kubenswrapper[7604]: I0309 16:28:39.561298 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-catalog-content\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.561730 master-0 kubenswrapper[7604]: I0309 16:28:39.561693 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgl27\" (UniqueName: \"kubernetes.io/projected/1da6f189-535a-4bf1-bbdb-758327651ae2-kube-api-access-xgl27\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.665778 master-0 kubenswrapper[7604]: I0309 16:28:39.664841 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-utilities\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.665778 master-0 kubenswrapper[7604]: I0309 16:28:39.664909 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-catalog-content\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.665778 master-0 kubenswrapper[7604]: I0309 16:28:39.665011 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgl27\" (UniqueName: \"kubernetes.io/projected/1da6f189-535a-4bf1-bbdb-758327651ae2-kube-api-access-xgl27\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.666048 master-0 kubenswrapper[7604]: I0309 16:28:39.665975 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-utilities\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.666483 master-0 kubenswrapper[7604]: I0309 16:28:39.666386 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-catalog-content\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.681046 master-0 kubenswrapper[7604]: I0309 16:28:39.681001 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgl27\" (UniqueName: \"kubernetes.io/projected/1da6f189-535a-4bf1-bbdb-758327651ae2-kube-api-access-xgl27\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:39.799612 master-0 kubenswrapper[7604]: I0309 16:28:39.798882 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:28:46.771341 master-0 kubenswrapper[7604]: I0309 16:28:46.769885 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"]
Mar 09 16:28:50.490434 master-0 kubenswrapper[7604]: W0309 16:28:50.487220 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b1790eb_a3b2_4cc6_9f0a_f5fb56137c6d.slice/crio-426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c WatchSource:0}: Error finding container 426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c: Status 404 returned error can't find the container with id 426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c
Mar 09 16:28:50.490434 master-0 kubenswrapper[7604]: W0309 16:28:50.490076 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaf704e3_daf2_4934_a04e_d31df8df0c4a.slice/crio-2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef WatchSource:0}: Error finding container 2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef: Status 404 returned error can't find the container with id 2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef
Mar 09 16:28:50.862025 master-0 kubenswrapper[7604]: I0309 16:28:50.861972 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"]
Mar 09 16:28:50.913910 master-0 kubenswrapper[7604]: I0309 16:28:50.913858 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" event={"ID":"34c0b60e-da69-452d-858d-0af77f18946d","Type":"ContainerStarted","Data":"a1d7f21b91418c1f79a78b411050f3c049413bf2aa574da18763a4647f55d117"}
Mar 09 16:28:50.926625 master-0 kubenswrapper[7604]: I0309 16:28:50.925620 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kqtzc" event={"ID":"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d","Type":"ContainerStarted","Data":"426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c"}
Mar 09 16:28:50.928999 master-0 kubenswrapper[7604]: I0309 16:28:50.928702 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94s4v" event={"ID":"baf704e3-daf2-4934-a04e-d31df8df0c4a","Type":"ContainerStarted","Data":"1e1a71876c065e4a2ec92b9c6d57e6068b1dc43657251449e7b4895e935a9448"}
Mar 09 16:28:50.928999 master-0 kubenswrapper[7604]: I0309 16:28:50.928773 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94s4v" event={"ID":"baf704e3-daf2-4934-a04e-d31df8df0c4a","Type":"ContainerStarted","Data":"2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef"}
Mar 09 16:28:50.932790 master-0 kubenswrapper[7604]: I0309 16:28:50.932407 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" event={"ID":"357570a4-f69b-4970-9b6f-fe06fc4c2f90","Type":"ContainerStarted","Data":"da01301d90c8ec36dd26e650eefd6003d2c0b759242bb4c2d47a570d6b83fec7"}
Mar 09 16:28:50.967360 master-0 kubenswrapper[7604]: I0309 16:28:50.962291 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" podStartSLOduration=4.276276868 podStartE2EDuration="19.962272486s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:28:32.507618315 +0000 UTC m=+169.561587738" lastFinishedPulling="2026-03-09 16:28:48.193613933 +0000 UTC m=+185.247583356" observedRunningTime="2026-03-09 16:28:50.959861226 +0000 UTC m=+188.013830659" watchObservedRunningTime="2026-03-09 16:28:50.962272486 +0000 UTC m=+188.016241899"
Mar 09 16:28:50.967360 master-0 kubenswrapper[7604]: I0309 16:28:50.965214 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" event={"ID":"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a","Type":"ContainerStarted","Data":"20464160fe62b20d4604c4241b261bf11ea2dcd009978635a17fe0c1b62a89ab"}
Mar 09 16:28:50.967360 master-0 kubenswrapper[7604]: I0309 16:28:50.966725 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxm65"]
Mar 09 16:28:51.021113 master-0 kubenswrapper[7604]: I0309 16:28:51.020789 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" event={"ID":"8796f37c-d1ec-469d-90df-e007bf620e8c","Type":"ContainerStarted","Data":"64b8f39412ffd069823ee4379537073fa507fd69a0396dcd45edeef358dfad47"}
Mar 09 16:28:51.021390 master-0 kubenswrapper[7604]: I0309 16:28:51.021322 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"
Mar 09 16:28:51.025874 master-0 kubenswrapper[7604]: I0309 16:28:51.025800 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" event={"ID":"631f2bdf-2ed4-4315-98c3-c5a538d0aec3","Type":"ContainerStarted","Data":"ec2bd4079a912677c69adce5f15ccbeec93411cab07eef7010dd35a99bc07993"}
Mar 09 16:28:51.039902 master-0 kubenswrapper[7604]: I0309 16:28:51.035733 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49bwx"]
Mar 09 16:28:51.044170 master-0 kubenswrapper[7604]: I0309 16:28:51.044126 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfgzl"]
Mar 09 16:28:51.065953 master-0 kubenswrapper[7604]: I0309 16:28:51.065878 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" podStartSLOduration=18.065857486 podStartE2EDuration="18.065857486s" podCreationTimestamp="2026-03-09 16:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:28:51.063989692 +0000 UTC m=+188.117959125" watchObservedRunningTime="2026-03-09 16:28:51.065857486 +0000 UTC m=+188.119826919"
Mar 09 16:28:51.089150 master-0 kubenswrapper[7604]: W0309 16:28:51.089065 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25fcaed_de1e_40d3_8163_61d5d0057bb8.slice/crio-719f513fd7df82250e72105e3654c9cb3e7a0601471be50d2e84638dcfb8c2c5 WatchSource:0}: Error finding container 719f513fd7df82250e72105e3654c9cb3e7a0601471be50d2e84638dcfb8c2c5: Status 404 returned error can't find the container with id 719f513fd7df82250e72105e3654c9cb3e7a0601471be50d2e84638dcfb8c2c5
Mar 09 16:28:51.092925 master-0 kubenswrapper[7604]: I0309 16:28:51.092850 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" podStartSLOduration=2.535936455 podStartE2EDuration="20.092829401s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:28:32.844612118 +0000 UTC m=+169.898581541" lastFinishedPulling="2026-03-09 16:28:50.401505064 +0000 UTC m=+187.455474487" observedRunningTime="2026-03-09 16:28:51.089350298 +0000 UTC m=+188.143319721" watchObservedRunningTime="2026-03-09 16:28:51.092829401 +0000 UTC m=+188.146798824"
Mar 09 16:28:51.097098 master-0 kubenswrapper[7604]: W0309 16:28:51.097020 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8946985d_17c1_4617_a678_4d57188fd5e5.slice/crio-197e39373109f2b561307554a2515def8c543bdb76e60c49e55466638b95d82b WatchSource:0}: Error finding container 197e39373109f2b561307554a2515def8c543bdb76e60c49e55466638b95d82b: Status 404 returned error can't find the container with id 197e39373109f2b561307554a2515def8c543bdb76e60c49e55466638b95d82b
Mar 09 16:28:51.103926 master-0 kubenswrapper[7604]: W0309 16:28:51.103843 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1da6f189_535a_4bf1_bbdb_758327651ae2.slice/crio-3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546 WatchSource:0}: Error finding container 3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546: Status 404 returned error can't find the container with id 3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546
Mar 09 16:28:51.207868 master-0 kubenswrapper[7604]: I0309 16:28:51.204659 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sj6x9"]
Mar 09 16:28:51.254542 master-0 kubenswrapper[7604]: I0309 16:28:51.253218 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"]
Mar 09 16:28:51.254542 master-0 kubenswrapper[7604]: I0309 16:28:51.253291 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcvrg"]
Mar 09 16:28:51.289990 master-0 kubenswrapper[7604]: W0309 16:28:51.286597 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe856881_2ceb_4803_8330_4a27ad8b1937.slice/crio-bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217 WatchSource:0}: Error finding container bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217: Status 404 returned error can't find the container with id bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217
Mar 09 16:28:51.560744 master-0 kubenswrapper[7604]: I0309 16:28:51.560698 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"
Mar 09 16:28:52.033077 master-0 kubenswrapper[7604]: I0309 16:28:52.032772 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" event={"ID":"812b321a-943d-4716-a17c-7f805333ef42","Type":"ContainerStarted","Data":"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c"}
Mar 09 16:28:52.033077 master-0 kubenswrapper[7604]: I0309 16:28:52.032850 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="kube-rbac-proxy" containerID="cri-o://5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836" gracePeriod=30
Mar 09 16:28:52.033077 master-0 kubenswrapper[7604]: I0309 16:28:52.032933 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="machine-approver-controller" containerID="cri-o://fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c" gracePeriod=30
Mar 09 16:28:52.053659 master-0 kubenswrapper[7604]: I0309 16:28:52.051144 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" podStartSLOduration=5.384109929 podStartE2EDuration="21.051125768s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:28:32.526601284 +0000 UTC m=+169.580570707" lastFinishedPulling="2026-03-09 16:28:48.193617113 +0000 UTC m=+185.247586546" observedRunningTime="2026-03-09 16:28:52.046873323 +0000 UTC m=+189.100842766" watchObservedRunningTime="2026-03-09 16:28:52.051125768 +0000 UTC m=+189.105095191"
Mar 09 16:28:52.055501 master-0 kubenswrapper[7604]: I0309 16:28:52.055416 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kqtzc" event={"ID":"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d","Type":"ContainerStarted","Data":"88716e3ab70738e7b73a10cd8011db95aaaddf8b2d5d41cd7e0999e112d84f2d"}
Mar 09 16:28:52.062339 master-0 kubenswrapper[7604]: I0309 16:28:52.062268 7604 generic.go:334] "Generic (PLEG): container finished" podID="3745c679-2ea9-4382-9270-4d3fbbaaf296" containerID="c375602309b4389668beef44b0297110b18bbf2efc79b2919215e7134a14a3e3" exitCode=0
Mar 09 16:28:52.062339 master-0 kubenswrapper[7604]: I0309 16:28:52.062318 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gkw8" event={"ID":"3745c679-2ea9-4382-9270-4d3fbbaaf296","Type":"ContainerDied","Data":"c375602309b4389668beef44b0297110b18bbf2efc79b2919215e7134a14a3e3"}
Mar 09 16:28:52.068948 master-0 kubenswrapper[7604]: I0309 16:28:52.065784 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" event={"ID":"34c0b60e-da69-452d-858d-0af77f18946d","Type":"ContainerStarted","Data":"0066abffaef5dc4626ef847bc5d319a4d29adc170901d8d3c79af35b659c73c9"}
Mar 09 16:28:52.068948 master-0 kubenswrapper[7604]: I0309 16:28:52.067711 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sj6x9" event={"ID":"5587e967-124e-4f2a-b7fb-42cb16bfc337","Type":"ContainerStarted","Data":"fd39f0db4c8cb49b906ba36723dbeb15b7ced8a9a0505c21a799794cabf48a9c"}
Mar 09 16:28:52.074563 master-0 kubenswrapper[7604]: I0309 16:28:52.073286 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kqtzc" podStartSLOduration=18.073265 podStartE2EDuration="18.073265s" podCreationTimestamp="2026-03-09 16:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:28:52.070999643 +0000 UTC m=+189.124969086" watchObservedRunningTime="2026-03-09 16:28:52.073265 +0000 UTC m=+189.127234433"
Mar 09 16:28:52.080211 master-0 kubenswrapper[7604]: I0309 16:28:52.079446 7604 generic.go:334] "Generic (PLEG): container finished" podID="aec186fc-aead-47fb-a7e1-8c9325897c47" containerID="c4eb68e7264550f4ffbefbb8ac663e749aa15295f8af2d3fc21d82134f75fd3a" exitCode=0
Mar 09 16:28:52.080211 master-0 kubenswrapper[7604]: I0309 16:28:52.079525 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqjw" event={"ID":"aec186fc-aead-47fb-a7e1-8c9325897c47","Type":"ContainerDied","Data":"c4eb68e7264550f4ffbefbb8ac663e749aa15295f8af2d3fc21d82134f75fd3a"}
Mar 09 16:28:52.090226 master-0 kubenswrapper[7604]: I0309 16:28:52.090170 7604 generic.go:334] "Generic (PLEG): container finished" podID="e25fcaed-de1e-40d3-8163-61d5d0057bb8" containerID="b1b3ef7bb6ad7f9db884177c5218a9385fb4d8fc64928b99c59cc91517299920" exitCode=0
Mar 09 16:28:52.090313 master-0 kubenswrapper[7604]: I0309 16:28:52.090253 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxm65" event={"ID":"e25fcaed-de1e-40d3-8163-61d5d0057bb8","Type":"ContainerDied","Data":"b1b3ef7bb6ad7f9db884177c5218a9385fb4d8fc64928b99c59cc91517299920"}
Mar 09 16:28:52.090313 master-0 kubenswrapper[7604]: I0309 16:28:52.090278 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxm65" event={"ID":"e25fcaed-de1e-40d3-8163-61d5d0057bb8","Type":"ContainerStarted","Data":"719f513fd7df82250e72105e3654c9cb3e7a0601471be50d2e84638dcfb8c2c5"}
Mar 09 16:28:52.094621 master-0 kubenswrapper[7604]: I0309 16:28:52.093787 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" event={"ID":"a320d845-3a5d-4027-a765-f0b2dc07f9de","Type":"ContainerStarted","Data":"312e0d000f0838892125bebe50178c057890fa63491797d6753b9b9e748b57c3"}
Mar 09 16:28:52.094621 master-0 kubenswrapper[7604]: I0309 16:28:52.093813 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" event={"ID":"a320d845-3a5d-4027-a765-f0b2dc07f9de","Type":"ContainerStarted","Data":"91158bad31d126f335930945d685253a8862c41cc0ef9e00a780fb2229ca874e"}
Mar 09 16:28:52.095201 master-0 kubenswrapper[7604]: I0309 16:28:52.095135 7604 generic.go:334] "Generic (PLEG): container finished" podID="be856881-2ceb-4803-8330-4a27ad8b1937" containerID="b8b2f1d085aa9dfc2fab38e228753a6c99a8279f2a4596b733cb32f506c3c80e" exitCode=0
Mar 09 16:28:52.095201 master-0 kubenswrapper[7604]: I0309 16:28:52.095171 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrg" event={"ID":"be856881-2ceb-4803-8330-4a27ad8b1937","Type":"ContainerDied","Data":"b8b2f1d085aa9dfc2fab38e228753a6c99a8279f2a4596b733cb32f506c3c80e"}
Mar 09 16:28:52.095201 master-0 kubenswrapper[7604]: I0309 16:28:52.095185 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrg" event={"ID":"be856881-2ceb-4803-8330-4a27ad8b1937","Type":"ContainerStarted","Data":"bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217"}
Mar 09 16:28:52.130087 master-0 kubenswrapper[7604]: I0309 16:28:52.129918 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerStarted","Data":"a4a0956582684a89e1b7b4f0778806b9ef37e71e42a1b64feda5a71fde3ea4d6"}
Mar 09 16:28:52.132796 master-0 kubenswrapper[7604]:
I0309 16:28:52.132489 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" podStartSLOduration=3.6856762290000002 podStartE2EDuration="21.132462773s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:28:33.035791397 +0000 UTC m=+170.089760820" lastFinishedPulling="2026-03-09 16:28:50.482577941 +0000 UTC m=+187.536547364" observedRunningTime="2026-03-09 16:28:52.122705766 +0000 UTC m=+189.176675209" watchObservedRunningTime="2026-03-09 16:28:52.132462773 +0000 UTC m=+189.186432216" Mar 09 16:28:52.136729 master-0 kubenswrapper[7604]: I0309 16:28:52.135878 7604 generic.go:334] "Generic (PLEG): container finished" podID="1da6f189-535a-4bf1-bbdb-758327651ae2" containerID="182df7a2500961e13e750c1e7666f2ebae9c039f790cc286ba67b25badf99579" exitCode=0 Mar 09 16:28:52.136729 master-0 kubenswrapper[7604]: I0309 16:28:52.135949 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49bwx" event={"ID":"1da6f189-535a-4bf1-bbdb-758327651ae2","Type":"ContainerDied","Data":"182df7a2500961e13e750c1e7666f2ebae9c039f790cc286ba67b25badf99579"} Mar 09 16:28:52.136729 master-0 kubenswrapper[7604]: I0309 16:28:52.135979 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49bwx" event={"ID":"1da6f189-535a-4bf1-bbdb-758327651ae2","Type":"ContainerStarted","Data":"3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546"} Mar 09 16:28:52.138810 master-0 kubenswrapper[7604]: I0309 16:28:52.138500 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" event={"ID":"a6cd9347-eec9-4549-9de4-6033112634ce","Type":"ContainerStarted","Data":"02cbf965d5e0c4ffaa472e2cfe7e841ade99cf8d72dd549979a2aa283c5f89eb"} Mar 09 16:28:52.138810 master-0 kubenswrapper[7604]: I0309 16:28:52.138534 7604 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" event={"ID":"a6cd9347-eec9-4549-9de4-6033112634ce","Type":"ContainerStarted","Data":"eb70b637ebcdf20545438ca3a9998bdd103e60d200280f4b769a5fd812b5a907"} Mar 09 16:28:52.162492 master-0 kubenswrapper[7604]: I0309 16:28:52.156243 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" event={"ID":"8be2517a-6f28-4289-a108-6e3345a1e246","Type":"ContainerStarted","Data":"849641fef697929d82e47cd74e196c87b6f13e825237b99e39d16fe99de91e48"} Mar 09 16:28:52.175623 master-0 kubenswrapper[7604]: I0309 16:28:52.175563 7604 generic.go:334] "Generic (PLEG): container finished" podID="8946985d-17c1-4617-a678-4d57188fd5e5" containerID="451c7d8d5455531481ef3352e732aa0024b5f270a1ea850ed54740c7f0d0d61c" exitCode=0 Mar 09 16:28:52.176092 master-0 kubenswrapper[7604]: I0309 16:28:52.175709 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfgzl" event={"ID":"8946985d-17c1-4617-a678-4d57188fd5e5","Type":"ContainerDied","Data":"451c7d8d5455531481ef3352e732aa0024b5f270a1ea850ed54740c7f0d0d61c"} Mar 09 16:28:52.176092 master-0 kubenswrapper[7604]: I0309 16:28:52.175759 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfgzl" event={"ID":"8946985d-17c1-4617-a678-4d57188fd5e5","Type":"ContainerStarted","Data":"197e39373109f2b561307554a2515def8c543bdb76e60c49e55466638b95d82b"} Mar 09 16:28:52.195684 master-0 kubenswrapper[7604]: I0309 16:28:52.195573 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-94s4v" event={"ID":"baf704e3-daf2-4934-a04e-d31df8df0c4a","Type":"ContainerStarted","Data":"079ea5d29885897e15bc72beffe86f14e7978531d12efc018200342b33d36fbc"} Mar 09 16:28:52.197600 master-0 kubenswrapper[7604]: I0309 16:28:52.197578 7604 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:52.250440 master-0 kubenswrapper[7604]: I0309 16:28:52.246744 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" podStartSLOduration=3.6944374570000003 podStartE2EDuration="21.246712777s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:28:32.788133795 +0000 UTC m=+169.842103218" lastFinishedPulling="2026-03-09 16:28:50.340409115 +0000 UTC m=+187.394378538" observedRunningTime="2026-03-09 16:28:52.239263028 +0000 UTC m=+189.293232461" watchObservedRunningTime="2026-03-09 16:28:52.246712777 +0000 UTC m=+189.300682240" Mar 09 16:28:52.261895 master-0 kubenswrapper[7604]: I0309 16:28:52.261845 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q455j\" (UniqueName: \"kubernetes.io/projected/812b321a-943d-4716-a17c-7f805333ef42-kube-api-access-q455j\") pod \"812b321a-943d-4716-a17c-7f805333ef42\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " Mar 09 16:28:52.262263 master-0 kubenswrapper[7604]: I0309 16:28:52.262246 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-auth-proxy-config\") pod \"812b321a-943d-4716-a17c-7f805333ef42\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " Mar 09 16:28:52.263044 master-0 kubenswrapper[7604]: I0309 16:28:52.263027 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/812b321a-943d-4716-a17c-7f805333ef42-machine-approver-tls\") pod \"812b321a-943d-4716-a17c-7f805333ef42\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " Mar 09 16:28:52.263240 master-0 kubenswrapper[7604]: I0309 16:28:52.263219 7604 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-config\") pod \"812b321a-943d-4716-a17c-7f805333ef42\" (UID: \"812b321a-943d-4716-a17c-7f805333ef42\") " Mar 09 16:28:52.263975 master-0 kubenswrapper[7604]: I0309 16:28:52.262966 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "812b321a-943d-4716-a17c-7f805333ef42" (UID: "812b321a-943d-4716-a17c-7f805333ef42"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:28:52.265256 master-0 kubenswrapper[7604]: I0309 16:28:52.264513 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812b321a-943d-4716-a17c-7f805333ef42-kube-api-access-q455j" (OuterVolumeSpecName: "kube-api-access-q455j") pod "812b321a-943d-4716-a17c-7f805333ef42" (UID: "812b321a-943d-4716-a17c-7f805333ef42"). InnerVolumeSpecName "kube-api-access-q455j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:28:52.265700 master-0 kubenswrapper[7604]: I0309 16:28:52.265477 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-config" (OuterVolumeSpecName: "config") pod "812b321a-943d-4716-a17c-7f805333ef42" (UID: "812b321a-943d-4716-a17c-7f805333ef42"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:28:52.282260 master-0 kubenswrapper[7604]: I0309 16:28:52.282218 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812b321a-943d-4716-a17c-7f805333ef42-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "812b321a-943d-4716-a17c-7f805333ef42" (UID: "812b321a-943d-4716-a17c-7f805333ef42"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:28:52.289550 master-0 kubenswrapper[7604]: I0309 16:28:52.289487 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-94s4v" podStartSLOduration=14.289469216 podStartE2EDuration="14.289469216s" podCreationTimestamp="2026-03-09 16:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:28:52.289021743 +0000 UTC m=+189.342991176" watchObservedRunningTime="2026-03-09 16:28:52.289469216 +0000 UTC m=+189.343438649" Mar 09 16:28:52.373541 master-0 kubenswrapper[7604]: I0309 16:28:52.364724 7604 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.373541 master-0 kubenswrapper[7604]: I0309 16:28:52.364759 7604 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/812b321a-943d-4716-a17c-7f805333ef42-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.373541 master-0 kubenswrapper[7604]: I0309 16:28:52.364768 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/812b321a-943d-4716-a17c-7f805333ef42-config\") on node \"master-0\" DevicePath \"\"" Mar 09 
16:28:52.373541 master-0 kubenswrapper[7604]: I0309 16:28:52.364777 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q455j\" (UniqueName: \"kubernetes.io/projected/812b321a-943d-4716-a17c-7f805333ef42-kube-api-access-q455j\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.513144 master-0 kubenswrapper[7604]: I0309 16:28:52.508687 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxm65" Mar 09 16:28:52.568864 master-0 kubenswrapper[7604]: I0309 16:28:52.568729 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-catalog-content\") pod \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " Mar 09 16:28:52.568864 master-0 kubenswrapper[7604]: I0309 16:28:52.568863 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-utilities\") pod \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " Mar 09 16:28:52.569506 master-0 kubenswrapper[7604]: I0309 16:28:52.568892 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whs9z\" (UniqueName: \"kubernetes.io/projected/e25fcaed-de1e-40d3-8163-61d5d0057bb8-kube-api-access-whs9z\") pod \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\" (UID: \"e25fcaed-de1e-40d3-8163-61d5d0057bb8\") " Mar 09 16:28:52.569506 master-0 kubenswrapper[7604]: I0309 16:28:52.569183 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e25fcaed-de1e-40d3-8163-61d5d0057bb8" (UID: "e25fcaed-de1e-40d3-8163-61d5d0057bb8"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:28:52.579589 master-0 kubenswrapper[7604]: I0309 16:28:52.570571 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-utilities" (OuterVolumeSpecName: "utilities") pod "e25fcaed-de1e-40d3-8163-61d5d0057bb8" (UID: "e25fcaed-de1e-40d3-8163-61d5d0057bb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:28:52.579589 master-0 kubenswrapper[7604]: I0309 16:28:52.574862 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25fcaed-de1e-40d3-8163-61d5d0057bb8-kube-api-access-whs9z" (OuterVolumeSpecName: "kube-api-access-whs9z") pod "e25fcaed-de1e-40d3-8163-61d5d0057bb8" (UID: "e25fcaed-de1e-40d3-8163-61d5d0057bb8"). InnerVolumeSpecName "kube-api-access-whs9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:28:52.647253 master-0 kubenswrapper[7604]: I0309 16:28:52.646844 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:52.673101 master-0 kubenswrapper[7604]: I0309 16:28:52.672924 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-utilities\") pod \"8946985d-17c1-4617-a678-4d57188fd5e5\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " Mar 09 16:28:52.673800 master-0 kubenswrapper[7604]: I0309 16:28:52.673748 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkcw7\" (UniqueName: \"kubernetes.io/projected/8946985d-17c1-4617-a678-4d57188fd5e5-kube-api-access-xkcw7\") pod \"8946985d-17c1-4617-a678-4d57188fd5e5\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " Mar 09 16:28:52.673800 master-0 kubenswrapper[7604]: I0309 16:28:52.673747 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-utilities" (OuterVolumeSpecName: "utilities") pod "8946985d-17c1-4617-a678-4d57188fd5e5" (UID: "8946985d-17c1-4617-a678-4d57188fd5e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:28:52.674548 master-0 kubenswrapper[7604]: I0309 16:28:52.673999 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-catalog-content\") pod \"8946985d-17c1-4617-a678-4d57188fd5e5\" (UID: \"8946985d-17c1-4617-a678-4d57188fd5e5\") " Mar 09 16:28:52.675098 master-0 kubenswrapper[7604]: I0309 16:28:52.674495 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8946985d-17c1-4617-a678-4d57188fd5e5" (UID: "8946985d-17c1-4617-a678-4d57188fd5e5"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:28:52.677185 master-0 kubenswrapper[7604]: I0309 16:28:52.676881 7604 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-utilities\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.677185 master-0 kubenswrapper[7604]: I0309 16:28:52.676907 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whs9z\" (UniqueName: \"kubernetes.io/projected/e25fcaed-de1e-40d3-8163-61d5d0057bb8-kube-api-access-whs9z\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.677185 master-0 kubenswrapper[7604]: I0309 16:28:52.676918 7604 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.677185 master-0 kubenswrapper[7604]: I0309 16:28:52.676927 7604 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25fcaed-de1e-40d3-8163-61d5d0057bb8-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.677185 master-0 kubenswrapper[7604]: I0309 16:28:52.676938 7604 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8946985d-17c1-4617-a678-4d57188fd5e5-utilities\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.678660 master-0 kubenswrapper[7604]: I0309 16:28:52.677657 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8946985d-17c1-4617-a678-4d57188fd5e5-kube-api-access-xkcw7" (OuterVolumeSpecName: "kube-api-access-xkcw7") pod "8946985d-17c1-4617-a678-4d57188fd5e5" (UID: "8946985d-17c1-4617-a678-4d57188fd5e5"). InnerVolumeSpecName "kube-api-access-xkcw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:28:52.778359 master-0 kubenswrapper[7604]: I0309 16:28:52.778307 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkcw7\" (UniqueName: \"kubernetes.io/projected/8946985d-17c1-4617-a678-4d57188fd5e5-kube-api-access-xkcw7\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:52.900238 master-0 kubenswrapper[7604]: I0309 16:28:52.900049 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"] Mar 09 16:28:53.223923 master-0 kubenswrapper[7604]: I0309 16:28:53.223873 7604 generic.go:334] "Generic (PLEG): container finished" podID="812b321a-943d-4716-a17c-7f805333ef42" containerID="fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c" exitCode=0 Mar 09 16:28:53.223923 master-0 kubenswrapper[7604]: I0309 16:28:53.223913 7604 generic.go:334] "Generic (PLEG): container finished" podID="812b321a-943d-4716-a17c-7f805333ef42" containerID="5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836" exitCode=0 Mar 09 16:28:53.224940 master-0 kubenswrapper[7604]: I0309 16:28:53.223955 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" event={"ID":"812b321a-943d-4716-a17c-7f805333ef42","Type":"ContainerDied","Data":"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c"} Mar 09 16:28:53.224940 master-0 kubenswrapper[7604]: I0309 16:28:53.223983 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" event={"ID":"812b321a-943d-4716-a17c-7f805333ef42","Type":"ContainerDied","Data":"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836"} Mar 09 16:28:53.224940 master-0 kubenswrapper[7604]: I0309 16:28:53.223998 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" event={"ID":"812b321a-943d-4716-a17c-7f805333ef42","Type":"ContainerDied","Data":"985254158c4efd99b6461a9bedfc34bb053a6c3692cd6787bbf20cc60ab472ae"} Mar 09 16:28:53.224940 master-0 kubenswrapper[7604]: I0309 16:28:53.224017 7604 scope.go:117] "RemoveContainer" containerID="fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c" Mar 09 16:28:53.224940 master-0 kubenswrapper[7604]: I0309 16:28:53.224181 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp" Mar 09 16:28:53.227481 master-0 kubenswrapper[7604]: I0309 16:28:53.227411 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerStarted","Data":"4aba00ad12852a446660c41e3679cb36779a9e833f460f5150a8edd0cdeb5825"} Mar 09 16:28:53.227662 master-0 kubenswrapper[7604]: I0309 16:28:53.227486 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerStarted","Data":"4a254137929f22b40ff0b2bad7179c0ca99ab4b48a4ce338bafb6ae74b824778"} Mar 09 16:28:53.235350 master-0 kubenswrapper[7604]: I0309 16:28:53.235296 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfgzl" event={"ID":"8946985d-17c1-4617-a678-4d57188fd5e5","Type":"ContainerDied","Data":"197e39373109f2b561307554a2515def8c543bdb76e60c49e55466638b95d82b"} Mar 09 16:28:53.235552 master-0 kubenswrapper[7604]: I0309 16:28:53.235410 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfgzl" Mar 09 16:28:53.242328 master-0 kubenswrapper[7604]: I0309 16:28:53.242282 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gkw8" event={"ID":"3745c679-2ea9-4382-9270-4d3fbbaaf296","Type":"ContainerStarted","Data":"4262eb871e8b28e42f3db427050356711afde94262326c48308170ad5e42bdae"} Mar 09 16:28:53.246161 master-0 kubenswrapper[7604]: I0309 16:28:53.246101 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqjw" event={"ID":"aec186fc-aead-47fb-a7e1-8c9325897c47","Type":"ContainerStarted","Data":"d0c506088b74eeaa84023fa58de803bd40314ce34d17be20b2d54a752348f036"} Mar 09 16:28:53.251138 master-0 kubenswrapper[7604]: I0309 16:28:53.251084 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxm65" event={"ID":"e25fcaed-de1e-40d3-8163-61d5d0057bb8","Type":"ContainerDied","Data":"719f513fd7df82250e72105e3654c9cb3e7a0601471be50d2e84638dcfb8c2c5"} Mar 09 16:28:53.251287 master-0 kubenswrapper[7604]: I0309 16:28:53.251159 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wxm65" Mar 09 16:28:53.335170 master-0 kubenswrapper[7604]: I0309 16:28:53.331689 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfgzl"] Mar 09 16:28:53.348456 master-0 kubenswrapper[7604]: I0309 16:28:53.337147 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfgzl"] Mar 09 16:28:53.355507 master-0 kubenswrapper[7604]: I0309 16:28:53.351875 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zrqjw" podStartSLOduration=3.2786138559999998 podStartE2EDuration="25.351855398s" podCreationTimestamp="2026-03-09 16:28:28 +0000 UTC" firstStartedPulling="2026-03-09 16:28:30.668466041 +0000 UTC m=+167.722435464" lastFinishedPulling="2026-03-09 16:28:52.741707583 +0000 UTC m=+189.795677006" observedRunningTime="2026-03-09 16:28:53.350077256 +0000 UTC m=+190.404046699" watchObservedRunningTime="2026-03-09 16:28:53.351855398 +0000 UTC m=+190.405824821" Mar 09 16:28:53.395496 master-0 kubenswrapper[7604]: I0309 16:28:53.390481 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8gkw8" podStartSLOduration=2.452589385 podStartE2EDuration="24.390461281s" podCreationTimestamp="2026-03-09 16:28:29 +0000 UTC" firstStartedPulling="2026-03-09 16:28:30.666528844 +0000 UTC m=+167.720498267" lastFinishedPulling="2026-03-09 16:28:52.60440074 +0000 UTC m=+189.658370163" observedRunningTime="2026-03-09 16:28:53.384219913 +0000 UTC m=+190.438189346" watchObservedRunningTime="2026-03-09 16:28:53.390461281 +0000 UTC m=+190.444430704" Mar 09 16:28:53.430960 master-0 kubenswrapper[7604]: I0309 16:28:53.430916 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxm65"] Mar 09 16:28:53.440104 master-0 kubenswrapper[7604]: I0309 16:28:53.440064 7604 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxm65"] Mar 09 16:28:53.468219 master-0 kubenswrapper[7604]: I0309 16:28:53.465533 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"] Mar 09 16:28:53.470358 master-0 kubenswrapper[7604]: I0309 16:28:53.470270 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-xdqlp"] Mar 09 16:28:53.499401 master-0 kubenswrapper[7604]: I0309 16:28:53.499314 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" podStartSLOduration=6.26003147 podStartE2EDuration="22.499285797s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:28:31.955038455 +0000 UTC m=+169.009007878" lastFinishedPulling="2026-03-09 16:28:48.194292782 +0000 UTC m=+185.248262205" observedRunningTime="2026-03-09 16:28:53.487492231 +0000 UTC m=+190.541461664" watchObservedRunningTime="2026-03-09 16:28:53.499285797 +0000 UTC m=+190.553255220" Mar 09 16:28:53.539362 master-0 kubenswrapper[7604]: I0309 16:28:53.539301 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg"] Mar 09 16:28:53.539585 master-0 kubenswrapper[7604]: E0309 16:28:53.539557 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8946985d-17c1-4617-a678-4d57188fd5e5" containerName="extract-utilities" Mar 09 16:28:53.539585 master-0 kubenswrapper[7604]: I0309 16:28:53.539575 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8946985d-17c1-4617-a678-4d57188fd5e5" containerName="extract-utilities" Mar 09 16:28:53.539679 master-0 kubenswrapper[7604]: E0309 16:28:53.539589 7604 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="machine-approver-controller" Mar 09 16:28:53.539679 master-0 kubenswrapper[7604]: I0309 16:28:53.539596 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="machine-approver-controller" Mar 09 16:28:53.539679 master-0 kubenswrapper[7604]: E0309 16:28:53.539607 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="kube-rbac-proxy" Mar 09 16:28:53.539679 master-0 kubenswrapper[7604]: I0309 16:28:53.539613 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="kube-rbac-proxy" Mar 09 16:28:53.539679 master-0 kubenswrapper[7604]: E0309 16:28:53.539622 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25fcaed-de1e-40d3-8163-61d5d0057bb8" containerName="extract-utilities" Mar 09 16:28:53.539679 master-0 kubenswrapper[7604]: I0309 16:28:53.539627 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25fcaed-de1e-40d3-8163-61d5d0057bb8" containerName="extract-utilities" Mar 09 16:28:53.539840 master-0 kubenswrapper[7604]: I0309 16:28:53.539738 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="8946985d-17c1-4617-a678-4d57188fd5e5" containerName="extract-utilities" Mar 09 16:28:53.539840 master-0 kubenswrapper[7604]: I0309 16:28:53.539753 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="kube-rbac-proxy" Mar 09 16:28:53.539840 master-0 kubenswrapper[7604]: I0309 16:28:53.539762 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="812b321a-943d-4716-a17c-7f805333ef42" containerName="machine-approver-controller" Mar 09 16:28:53.539840 master-0 kubenswrapper[7604]: I0309 16:28:53.539777 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25fcaed-de1e-40d3-8163-61d5d0057bb8" 
containerName="extract-utilities" Mar 09 16:28:53.540409 master-0 kubenswrapper[7604]: I0309 16:28:53.540379 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.544545 master-0 kubenswrapper[7604]: I0309 16:28:53.543583 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 09 16:28:53.544545 master-0 kubenswrapper[7604]: I0309 16:28:53.543841 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 09 16:28:53.544545 master-0 kubenswrapper[7604]: I0309 16:28:53.543994 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kz284" Mar 09 16:28:53.544545 master-0 kubenswrapper[7604]: I0309 16:28:53.544743 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 09 16:28:53.548004 master-0 kubenswrapper[7604]: I0309 16:28:53.547972 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 09 16:28:53.548192 master-0 kubenswrapper[7604]: I0309 16:28:53.548140 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 09 16:28:53.605453 master-0 kubenswrapper[7604]: I0309 16:28:53.605179 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.605453 master-0 kubenswrapper[7604]: I0309 
16:28:53.605262 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.605977 master-0 kubenswrapper[7604]: I0309 16:28:53.605627 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.605977 master-0 kubenswrapper[7604]: I0309 16:28:53.605753 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsbkx\" (UniqueName: \"kubernetes.io/projected/3ec3050d-8e6f-466a-995a-f78270408a85-kube-api-access-qsbkx\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.706955 master-0 kubenswrapper[7604]: I0309 16:28:53.706799 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.706955 master-0 kubenswrapper[7604]: I0309 16:28:53.706873 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbkx\" (UniqueName: 
\"kubernetes.io/projected/3ec3050d-8e6f-466a-995a-f78270408a85-kube-api-access-qsbkx\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.706955 master-0 kubenswrapper[7604]: I0309 16:28:53.706925 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.706955 master-0 kubenswrapper[7604]: I0309 16:28:53.706958 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.709268 master-0 kubenswrapper[7604]: I0309 16:28:53.709231 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.712223 master-0 kubenswrapper[7604]: I0309 16:28:53.709607 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.729825 master-0 
kubenswrapper[7604]: I0309 16:28:53.729777 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbkx\" (UniqueName: \"kubernetes.io/projected/3ec3050d-8e6f-466a-995a-f78270408a85-kube-api-access-qsbkx\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.736409 master-0 kubenswrapper[7604]: I0309 16:28:53.736363 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:53.869638 master-0 kubenswrapper[7604]: I0309 16:28:53.869582 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:28:54.261060 master-0 kubenswrapper[7604]: I0309 16:28:54.260962 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="cluster-cloud-controller-manager" containerID="cri-o://a4a0956582684a89e1b7b4f0778806b9ef37e71e42a1b64feda5a71fde3ea4d6" gracePeriod=30 Mar 09 16:28:54.261060 master-0 kubenswrapper[7604]: I0309 16:28:54.261038 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="kube-rbac-proxy" containerID="cri-o://4aba00ad12852a446660c41e3679cb36779a9e833f460f5150a8edd0cdeb5825" gracePeriod=30 Mar 09 16:28:54.261464 master-0 
kubenswrapper[7604]: I0309 16:28:54.261071 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="config-sync-controllers" containerID="cri-o://4a254137929f22b40ff0b2bad7179c0ca99ab4b48a4ce338bafb6ae74b824778" gracePeriod=30 Mar 09 16:28:54.585377 master-0 kubenswrapper[7604]: I0309 16:28:54.585214 7604 scope.go:117] "RemoveContainer" containerID="5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836" Mar 09 16:28:55.118046 master-0 kubenswrapper[7604]: I0309 16:28:55.117988 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812b321a-943d-4716-a17c-7f805333ef42" path="/var/lib/kubelet/pods/812b321a-943d-4716-a17c-7f805333ef42/volumes" Mar 09 16:28:55.119085 master-0 kubenswrapper[7604]: I0309 16:28:55.119032 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8946985d-17c1-4617-a678-4d57188fd5e5" path="/var/lib/kubelet/pods/8946985d-17c1-4617-a678-4d57188fd5e5/volumes" Mar 09 16:28:55.119861 master-0 kubenswrapper[7604]: I0309 16:28:55.119817 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25fcaed-de1e-40d3-8163-61d5d0057bb8" path="/var/lib/kubelet/pods/e25fcaed-de1e-40d3-8163-61d5d0057bb8/volumes" Mar 09 16:28:55.269849 master-0 kubenswrapper[7604]: I0309 16:28:55.269574 7604 generic.go:334] "Generic (PLEG): container finished" podID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerID="4aba00ad12852a446660c41e3679cb36779a9e833f460f5150a8edd0cdeb5825" exitCode=0 Mar 09 16:28:55.269849 master-0 kubenswrapper[7604]: I0309 16:28:55.269614 7604 generic.go:334] "Generic (PLEG): container finished" podID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerID="4a254137929f22b40ff0b2bad7179c0ca99ab4b48a4ce338bafb6ae74b824778" exitCode=0 Mar 09 16:28:55.269849 master-0 kubenswrapper[7604]: I0309 
16:28:55.269623 7604 generic.go:334] "Generic (PLEG): container finished" podID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerID="a4a0956582684a89e1b7b4f0778806b9ef37e71e42a1b64feda5a71fde3ea4d6" exitCode=0 Mar 09 16:28:55.269849 master-0 kubenswrapper[7604]: I0309 16:28:55.269626 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerDied","Data":"4aba00ad12852a446660c41e3679cb36779a9e833f460f5150a8edd0cdeb5825"} Mar 09 16:28:55.269849 master-0 kubenswrapper[7604]: I0309 16:28:55.269675 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerDied","Data":"4a254137929f22b40ff0b2bad7179c0ca99ab4b48a4ce338bafb6ae74b824778"} Mar 09 16:28:55.269849 master-0 kubenswrapper[7604]: I0309 16:28:55.269685 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerDied","Data":"a4a0956582684a89e1b7b4f0778806b9ef37e71e42a1b64feda5a71fde3ea4d6"} Mar 09 16:28:57.285786 master-0 kubenswrapper[7604]: I0309 16:28:57.285703 7604 generic.go:334] "Generic (PLEG): container finished" podID="8be2517a-6f28-4289-a108-6e3345a1e246" containerID="849641fef697929d82e47cd74e196c87b6f13e825237b99e39d16fe99de91e48" exitCode=0 Mar 09 16:28:57.285786 master-0 kubenswrapper[7604]: I0309 16:28:57.285749 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" event={"ID":"8be2517a-6f28-4289-a108-6e3345a1e246","Type":"ContainerDied","Data":"849641fef697929d82e47cd74e196c87b6f13e825237b99e39d16fe99de91e48"} Mar 09 
16:28:57.286827 master-0 kubenswrapper[7604]: I0309 16:28:57.286722 7604 scope.go:117] "RemoveContainer" containerID="849641fef697929d82e47cd74e196c87b6f13e825237b99e39d16fe99de91e48" Mar 09 16:28:57.351561 master-0 kubenswrapper[7604]: I0309 16:28:57.351527 7604 scope.go:117] "RemoveContainer" containerID="fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c" Mar 09 16:28:57.352032 master-0 kubenswrapper[7604]: E0309 16:28:57.351984 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c\": container with ID starting with fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c not found: ID does not exist" containerID="fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c" Mar 09 16:28:57.352099 master-0 kubenswrapper[7604]: I0309 16:28:57.352046 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c"} err="failed to get container status \"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c\": rpc error: code = NotFound desc = could not find container \"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c\": container with ID starting with fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c not found: ID does not exist" Mar 09 16:28:57.352099 master-0 kubenswrapper[7604]: I0309 16:28:57.352079 7604 scope.go:117] "RemoveContainer" containerID="5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836" Mar 09 16:28:57.352639 master-0 kubenswrapper[7604]: E0309 16:28:57.352613 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836\": container with ID starting with 
5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836 not found: ID does not exist" containerID="5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836" Mar 09 16:28:57.352716 master-0 kubenswrapper[7604]: I0309 16:28:57.352646 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836"} err="failed to get container status \"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836\": rpc error: code = NotFound desc = could not find container \"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836\": container with ID starting with 5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836 not found: ID does not exist" Mar 09 16:28:57.352716 master-0 kubenswrapper[7604]: I0309 16:28:57.352666 7604 scope.go:117] "RemoveContainer" containerID="fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c" Mar 09 16:28:57.353048 master-0 kubenswrapper[7604]: I0309 16:28:57.353020 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c"} err="failed to get container status \"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c\": rpc error: code = NotFound desc = could not find container \"fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c\": container with ID starting with fcb26f0e5abfb61b847d8939d6f0f758fadaf8370b529587f4c88e9fa107e97c not found: ID does not exist" Mar 09 16:28:57.353118 master-0 kubenswrapper[7604]: I0309 16:28:57.353046 7604 scope.go:117] "RemoveContainer" containerID="5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836" Mar 09 16:28:57.358107 master-0 kubenswrapper[7604]: I0309 16:28:57.358006 7604 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836"} err="failed to get container status \"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836\": rpc error: code = NotFound desc = could not find container \"5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836\": container with ID starting with 5c3b23cdfc4dc812700ffb7591bbc3e1bce8e8b4cfedd0ba99bc9553a8db2836 not found: ID does not exist" Mar 09 16:28:57.358107 master-0 kubenswrapper[7604]: I0309 16:28:57.358103 7604 scope.go:117] "RemoveContainer" containerID="451c7d8d5455531481ef3352e732aa0024b5f270a1ea850ed54740c7f0d0d61c" Mar 09 16:28:57.401070 master-0 kubenswrapper[7604]: I0309 16:28:57.400997 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" Mar 09 16:28:57.460321 master-0 kubenswrapper[7604]: I0309 16:28:57.460232 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d65ba99c-ecce-4678-a7dd-457638fb2829-host-etc-kube\") pod \"d65ba99c-ecce-4678-a7dd-457638fb2829\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " Mar 09 16:28:57.460561 master-0 kubenswrapper[7604]: I0309 16:28:57.460379 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-images\") pod \"d65ba99c-ecce-4678-a7dd-457638fb2829\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " Mar 09 16:28:57.460561 master-0 kubenswrapper[7604]: I0309 16:28:57.460386 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d65ba99c-ecce-4678-a7dd-457638fb2829-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "d65ba99c-ecce-4678-a7dd-457638fb2829" (UID: "d65ba99c-ecce-4678-a7dd-457638fb2829"). 
InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:28:57.460561 master-0 kubenswrapper[7604]: I0309 16:28:57.460454 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-auth-proxy-config\") pod \"d65ba99c-ecce-4678-a7dd-457638fb2829\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " Mar 09 16:28:57.460661 master-0 kubenswrapper[7604]: I0309 16:28:57.460576 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p62v\" (UniqueName: \"kubernetes.io/projected/d65ba99c-ecce-4678-a7dd-457638fb2829-kube-api-access-5p62v\") pod \"d65ba99c-ecce-4678-a7dd-457638fb2829\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " Mar 09 16:28:57.460661 master-0 kubenswrapper[7604]: I0309 16:28:57.460618 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d65ba99c-ecce-4678-a7dd-457638fb2829-cloud-controller-manager-operator-tls\") pod \"d65ba99c-ecce-4678-a7dd-457638fb2829\" (UID: \"d65ba99c-ecce-4678-a7dd-457638fb2829\") " Mar 09 16:28:57.460965 master-0 kubenswrapper[7604]: I0309 16:28:57.460917 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d65ba99c-ecce-4678-a7dd-457638fb2829" (UID: "d65ba99c-ecce-4678-a7dd-457638fb2829"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:28:57.460965 master-0 kubenswrapper[7604]: I0309 16:28:57.460939 7604 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/d65ba99c-ecce-4678-a7dd-457638fb2829-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:57.460965 master-0 kubenswrapper[7604]: I0309 16:28:57.460949 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-images" (OuterVolumeSpecName: "images") pod "d65ba99c-ecce-4678-a7dd-457638fb2829" (UID: "d65ba99c-ecce-4678-a7dd-457638fb2829"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:28:57.467137 master-0 kubenswrapper[7604]: I0309 16:28:57.467072 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d65ba99c-ecce-4678-a7dd-457638fb2829-kube-api-access-5p62v" (OuterVolumeSpecName: "kube-api-access-5p62v") pod "d65ba99c-ecce-4678-a7dd-457638fb2829" (UID: "d65ba99c-ecce-4678-a7dd-457638fb2829"). InnerVolumeSpecName "kube-api-access-5p62v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:28:57.467649 master-0 kubenswrapper[7604]: I0309 16:28:57.467585 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d65ba99c-ecce-4678-a7dd-457638fb2829-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "d65ba99c-ecce-4678-a7dd-457638fb2829" (UID: "d65ba99c-ecce-4678-a7dd-457638fb2829"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:28:57.564252 master-0 kubenswrapper[7604]: I0309 16:28:57.564064 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p62v\" (UniqueName: \"kubernetes.io/projected/d65ba99c-ecce-4678-a7dd-457638fb2829-kube-api-access-5p62v\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:57.564252 master-0 kubenswrapper[7604]: I0309 16:28:57.564105 7604 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/d65ba99c-ecce-4678-a7dd-457638fb2829-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:57.564252 master-0 kubenswrapper[7604]: I0309 16:28:57.564116 7604 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-images\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:57.564252 master-0 kubenswrapper[7604]: I0309 16:28:57.564128 7604 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d65ba99c-ecce-4678-a7dd-457638fb2829-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:28:58.297376 master-0 kubenswrapper[7604]: I0309 16:28:58.296964 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:28:58.306392 master-0 kubenswrapper[7604]: I0309 16:28:58.305062 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" event={"ID":"d65ba99c-ecce-4678-a7dd-457638fb2829","Type":"ContainerDied","Data":"0161f699c4304c5986c2b7c9bed720ea6736224b8c4d779a21133488f92f2331"} Mar 09 16:28:58.306392 master-0 kubenswrapper[7604]: I0309 16:28:58.305171 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq" Mar 09 16:28:59.176284 master-0 kubenswrapper[7604]: I0309 16:28:59.176204 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:59.176284 master-0 kubenswrapper[7604]: I0309 16:28:59.176275 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:59.216281 master-0 kubenswrapper[7604]: I0309 16:28:59.216216 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303017 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"] Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: E0309 16:28:59.303234 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="kube-rbac-proxy" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303246 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="kube-rbac-proxy" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: E0309 16:28:59.303267 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="cluster-cloud-controller-manager" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303274 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="cluster-cloud-controller-manager" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: E0309 16:28:59.303281 7604 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="config-sync-controllers" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303290 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="config-sync-controllers" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303399 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="config-sync-controllers" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303413 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="kube-rbac-proxy" Mar 09 16:28:59.304637 master-0 kubenswrapper[7604]: I0309 16:28:59.303444 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" containerName="cluster-cloud-controller-manager" Mar 09 16:28:59.305530 master-0 kubenswrapper[7604]: I0309 16:28:59.305041 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.306723 master-0 kubenswrapper[7604]: I0309 16:28:59.306701 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-nzqc5" Mar 09 16:28:59.307036 master-0 kubenswrapper[7604]: I0309 16:28:59.307023 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 09 16:28:59.346368 master-0 kubenswrapper[7604]: I0309 16:28:59.346324 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:28:59.400503 master-0 kubenswrapper[7604]: I0309 16:28:59.400371 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8972b380-8f87-4b73-8f95-440d34d03884-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.400712 master-0 kubenswrapper[7604]: I0309 16:28:59.400544 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwnd\" (UniqueName: \"kubernetes.io/projected/8972b380-8f87-4b73-8f95-440d34d03884-kube-api-access-8hwnd\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.400712 master-0 kubenswrapper[7604]: I0309 16:28:59.400603 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod 
\"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.502338 master-0 kubenswrapper[7604]: I0309 16:28:59.502259 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwnd\" (UniqueName: \"kubernetes.io/projected/8972b380-8f87-4b73-8f95-440d34d03884-kube-api-access-8hwnd\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.502682 master-0 kubenswrapper[7604]: I0309 16:28:59.502474 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.503125 master-0 kubenswrapper[7604]: I0309 16:28:59.503062 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8972b380-8f87-4b73-8f95-440d34d03884-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.503578 master-0 kubenswrapper[7604]: I0309 16:28:59.503539 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8972b380-8f87-4b73-8f95-440d34d03884-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.505877 master-0 kubenswrapper[7604]: I0309 16:28:59.505762 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.800563 master-0 kubenswrapper[7604]: I0309 16:28:59.799121 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"] Mar 09 16:28:59.831477 master-0 kubenswrapper[7604]: I0309 16:28:59.823999 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwnd\" (UniqueName: \"kubernetes.io/projected/8972b380-8f87-4b73-8f95-440d34d03884-kube-api-access-8hwnd\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:28:59.923698 master-0 kubenswrapper[7604]: I0309 16:28:59.923604 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"
Mar 09 16:28:59.933711 master-0 kubenswrapper[7604]: I0309 16:28:59.933670 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8gkw8"
Mar 09 16:28:59.933990 master-0 kubenswrapper[7604]: I0309 16:28:59.933959 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8gkw8"
Mar 09 16:28:59.966251 master-0 kubenswrapper[7604]: I0309 16:28:59.966177 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8gkw8"
Mar 09 16:29:00.354115 master-0 kubenswrapper[7604]: I0309 16:29:00.354052 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8gkw8"
Mar 09 16:29:01.834770 master-0 kubenswrapper[7604]: I0309 16:29:01.834697 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"]
Mar 09 16:29:02.327031 master-0 kubenswrapper[7604]: I0309 16:29:02.326970 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" event={"ID":"3ec3050d-8e6f-466a-995a-f78270408a85","Type":"ContainerStarted","Data":"6bef13556b054eeec06112dd3efb63b9b2d0c3aa5b54369f3f112afc33fa6fa0"}
Mar 09 16:29:03.312177 master-0 kubenswrapper[7604]: I0309 16:29:03.311821 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-59pzq"]
Mar 09 16:29:05.118308 master-0 kubenswrapper[7604]: I0309 16:29:05.118254 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d65ba99c-ecce-4678-a7dd-457638fb2829" path="/var/lib/kubelet/pods/d65ba99c-ecce-4678-a7dd-457638fb2829/volumes"
Mar 09 16:29:07.353982 master-0 kubenswrapper[7604]: I0309 16:29:07.353876 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"]
Mar 09 16:29:07.354956 master-0 kubenswrapper[7604]: I0309 16:29:07.354926 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.358493 master-0 kubenswrapper[7604]: I0309 16:29:07.357894 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 09 16:29:07.358493 master-0 kubenswrapper[7604]: I0309 16:29:07.357977 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 09 16:29:07.358493 master-0 kubenswrapper[7604]: I0309 16:29:07.358022 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:29:07.358493 master-0 kubenswrapper[7604]: I0309 16:29:07.358062 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 09 16:29:07.358493 master-0 kubenswrapper[7604]: I0309 16:29:07.358120 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-gkx8f"
Mar 09 16:29:07.358493 master-0 kubenswrapper[7604]: I0309 16:29:07.358120 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 09 16:29:07.410022 master-0 kubenswrapper[7604]: I0309 16:29:07.409963 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.410236 master-0 kubenswrapper[7604]: I0309 16:29:07.410036 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.410236 master-0 kubenswrapper[7604]: I0309 16:29:07.410070 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.410236 master-0 kubenswrapper[7604]: I0309 16:29:07.410209 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ea34ff7e-27fa-4c26-acc0-ec551985eb76-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.410348 master-0 kubenswrapper[7604]: I0309 16:29:07.410271 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl697\" (UniqueName: \"kubernetes.io/projected/ea34ff7e-27fa-4c26-acc0-ec551985eb76-kube-api-access-fl697\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.511664 master-0 kubenswrapper[7604]: I0309 16:29:07.511598 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ea34ff7e-27fa-4c26-acc0-ec551985eb76-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.511866 master-0 kubenswrapper[7604]: I0309 16:29:07.511657 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl697\" (UniqueName: \"kubernetes.io/projected/ea34ff7e-27fa-4c26-acc0-ec551985eb76-kube-api-access-fl697\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.511866 master-0 kubenswrapper[7604]: I0309 16:29:07.511722 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.511866 master-0 kubenswrapper[7604]: I0309 16:29:07.511755 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ea34ff7e-27fa-4c26-acc0-ec551985eb76-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.511866 master-0 kubenswrapper[7604]: I0309 16:29:07.511770 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.511866 master-0 kubenswrapper[7604]: I0309 16:29:07.511832 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.513462 master-0 kubenswrapper[7604]: I0309 16:29:07.512412 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.513462 master-0 kubenswrapper[7604]: I0309 16:29:07.512608 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:07.514680 master-0 kubenswrapper[7604]: I0309 16:29:07.514661 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:08.507562 master-0 kubenswrapper[7604]: I0309 16:29:08.503374 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl697\" (UniqueName: \"kubernetes.io/projected/ea34ff7e-27fa-4c26-acc0-ec551985eb76-kube-api-access-fl697\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:08.574510 master-0 kubenswrapper[7604]: I0309 16:29:08.573266 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:29:11.892479 master-0 kubenswrapper[7604]: I0309 16:29:11.891949 7604 scope.go:117] "RemoveContainer" containerID="b1b3ef7bb6ad7f9db884177c5218a9385fb4d8fc64928b99c59cc91517299920"
Mar 09 16:29:15.355529 master-0 kubenswrapper[7604]: I0309 16:29:15.355005 7604 scope.go:117] "RemoveContainer" containerID="4aba00ad12852a446660c41e3679cb36779a9e833f460f5150a8edd0cdeb5825"
Mar 09 16:29:15.383632 master-0 kubenswrapper[7604]: I0309 16:29:15.383602 7604 scope.go:117] "RemoveContainer" containerID="4a254137929f22b40ff0b2bad7179c0ca99ab4b48a4ce338bafb6ae74b824778"
Mar 09 16:29:15.472751 master-0 kubenswrapper[7604]: I0309 16:29:15.472690 7604 scope.go:117] "RemoveContainer" containerID="a4a0956582684a89e1b7b4f0778806b9ef37e71e42a1b64feda5a71fde3ea4d6"
Mar 09 16:29:15.954349 master-0 kubenswrapper[7604]: I0309 16:29:15.954284 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"]
Mar 09 16:29:15.987571 master-0 kubenswrapper[7604]: W0309 16:29:15.987505 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8972b380_8f87_4b73_8f95_440d34d03884.slice/crio-35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460 WatchSource:0}: Error finding container 35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460: Status 404 returned error can't find the container with id 35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460
Mar 09 16:29:16.434654 master-0 kubenswrapper[7604]: I0309 16:29:16.434591 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" event={"ID":"a320d845-3a5d-4027-a765-f0b2dc07f9de","Type":"ContainerStarted","Data":"d8bfd57294fc695c76b8af4578c39789fb2f27137f1950cb115ef500dca01244"}
Mar 09 16:29:16.436474 master-0 kubenswrapper[7604]: I0309 16:29:16.436361 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" event={"ID":"a6cd9347-eec9-4549-9de4-6033112634ce","Type":"ContainerStarted","Data":"4a72ada443de84c13a8cbe47843e972a9ed55f3d914623df43cbb70dacd90962"}
Mar 09 16:29:16.437930 master-0 kubenswrapper[7604]: I0309 16:29:16.437892 7604 generic.go:334] "Generic (PLEG): container finished" podID="be856881-2ceb-4803-8330-4a27ad8b1937" containerID="91eeee0f78c7370b1376450e55648943d546edf775431451383fe45a76895603" exitCode=0
Mar 09 16:29:16.438105 master-0 kubenswrapper[7604]: I0309 16:29:16.437952 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrg" event={"ID":"be856881-2ceb-4803-8330-4a27ad8b1937","Type":"ContainerDied","Data":"91eeee0f78c7370b1376450e55648943d546edf775431451383fe45a76895603"}
Mar 09 16:29:16.439652 master-0 kubenswrapper[7604]: I0309 16:29:16.439616 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" event={"ID":"8be2517a-6f28-4289-a108-6e3345a1e246","Type":"ContainerStarted","Data":"b75b2c20c3cfa0c51091b8718dd34df86d951c2d2ac1cd6cd940abb625f7fab0"}
Mar 09 16:29:16.441442 master-0 kubenswrapper[7604]: I0309 16:29:16.441380 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" event={"ID":"3ec3050d-8e6f-466a-995a-f78270408a85","Type":"ContainerStarted","Data":"2045c91c077228b5fc52cbacb88317be3538b9cb4ff34112c6659345b8d1fd77"}
Mar 09 16:29:16.441505 master-0 kubenswrapper[7604]: I0309 16:29:16.441449 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" event={"ID":"3ec3050d-8e6f-466a-995a-f78270408a85","Type":"ContainerStarted","Data":"0af277e702a0adc3e46b1464b0173222a4a6c025d38573dcd093c30919ad94fc"}
Mar 09 16:29:16.444327 master-0 kubenswrapper[7604]: I0309 16:29:16.444246 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" event={"ID":"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a","Type":"ContainerStarted","Data":"e8cb30c90125a1e3b3eb6f6752eb090667969ca7a1ad05a2f50043a22d1558b3"}
Mar 09 16:29:16.445580 master-0 kubenswrapper[7604]: I0309 16:29:16.445531 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerStarted","Data":"cd71269592a701160cbe606bc3b5a764b96e0af9d702d7660f9fc5b18a628065"}
Mar 09 16:29:16.445580 master-0 kubenswrapper[7604]: I0309 16:29:16.445571 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerStarted","Data":"16090dface4ebfac4ce59503c1b97e63c47315ed98b676af9cb614a7646af5db"}
Mar 09 16:29:16.447584 master-0 kubenswrapper[7604]: I0309 16:29:16.447537 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" event={"ID":"8972b380-8f87-4b73-8f95-440d34d03884","Type":"ContainerStarted","Data":"478050fc5a610db3a7ffbb70974c16fcbc1a3e86ff4bd2cba7f1c2f94f7b4a39"}
Mar 09 16:29:16.447584 master-0 kubenswrapper[7604]: I0309 16:29:16.447584 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" event={"ID":"8972b380-8f87-4b73-8f95-440d34d03884","Type":"ContainerStarted","Data":"35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460"}
Mar 09 16:29:16.449052 master-0 kubenswrapper[7604]: I0309 16:29:16.448990 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sj6x9" event={"ID":"5587e967-124e-4f2a-b7fb-42cb16bfc337","Type":"ContainerStarted","Data":"7ff8157bffba3145f69b3198264b84a722cba81689d70911da1b9b204c01aa11"}
Mar 09 16:29:16.449127 master-0 kubenswrapper[7604]: I0309 16:29:16.449065 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sj6x9" event={"ID":"5587e967-124e-4f2a-b7fb-42cb16bfc337","Type":"ContainerStarted","Data":"06ace20f841c3248167e7dc9d3aa11aecdcde5982710cfe10b8a6c3ed191d324"}
Mar 09 16:29:16.449127 master-0 kubenswrapper[7604]: I0309 16:29:16.449093 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:29:16.451064 master-0 kubenswrapper[7604]: I0309 16:29:16.450934 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49bwx" event={"ID":"1da6f189-535a-4bf1-bbdb-758327651ae2","Type":"ContainerStarted","Data":"32286fc29ff0c774f7955c0ba49c91530fb15cf50845d1f7c12e2c8a6cdabfca"}
Mar 09 16:29:17.458338 master-0 kubenswrapper[7604]: I0309 16:29:17.458285 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" event={"ID":"8972b380-8f87-4b73-8f95-440d34d03884","Type":"ContainerStarted","Data":"0b170eed65b91f8dffe98fe9c668fb8c6d414f40f41745fbc08f9484824749c2"}
Mar 09 16:29:17.460401 master-0 kubenswrapper[7604]: I0309 16:29:17.460350 7604 generic.go:334] "Generic (PLEG): container finished" podID="1da6f189-535a-4bf1-bbdb-758327651ae2" containerID="32286fc29ff0c774f7955c0ba49c91530fb15cf50845d1f7c12e2c8a6cdabfca" exitCode=0
Mar 09 16:29:17.460553 master-0 kubenswrapper[7604]: I0309 16:29:17.460488 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49bwx" event={"ID":"1da6f189-535a-4bf1-bbdb-758327651ae2","Type":"ContainerDied","Data":"32286fc29ff0c774f7955c0ba49c91530fb15cf50845d1f7c12e2c8a6cdabfca"}
Mar 09 16:29:17.462861 master-0 kubenswrapper[7604]: I0309 16:29:17.462817 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerStarted","Data":"6bdedb0d309f6a7857987d14453fb1d08cc3f0b26b4d4d78a41c37d0918629b7"}
Mar 09 16:29:17.462861 master-0 kubenswrapper[7604]: I0309 16:29:17.462845 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerStarted","Data":"39d1c81df8c0e375db5e92a2da393b888f722383ebb7782e3b3f53c06fee366b"}
Mar 09 16:29:17.633111 master-0 kubenswrapper[7604]: I0309 16:29:17.633020 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" podStartSLOduration=20.851614666 podStartE2EDuration="44.632995724s" podCreationTimestamp="2026-03-09 16:28:33 +0000 UTC" firstStartedPulling="2026-03-09 16:28:51.703850872 +0000 UTC m=+188.757820305" lastFinishedPulling="2026-03-09 16:29:15.48523194 +0000 UTC m=+212.539201363" observedRunningTime="2026-03-09 16:29:16.90413348 +0000 UTC m=+213.958102913" watchObservedRunningTime="2026-03-09 16:29:17.632995724 +0000 UTC m=+214.686965147"
Mar 09 16:29:18.073996 master-0 kubenswrapper[7604]: I0309 16:29:18.068540 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" podStartSLOduration=20.822907399 podStartE2EDuration="45.068516024s" podCreationTimestamp="2026-03-09 16:28:33 +0000 UTC" firstStartedPulling="2026-03-09 16:28:51.375223496 +0000 UTC m=+188.429192919" lastFinishedPulling="2026-03-09 16:29:15.620832121 +0000 UTC m=+212.674801544" observedRunningTime="2026-03-09 16:29:18.038794787 +0000 UTC m=+215.092764220" watchObservedRunningTime="2026-03-09 16:29:18.068516024 +0000 UTC m=+215.122485447"
Mar 09 16:29:18.086337 master-0 kubenswrapper[7604]: I0309 16:29:18.086175 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-rvnwf"]
Mar 09 16:29:18.087376 master-0 kubenswrapper[7604]: I0309 16:29:18.087131 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.091650 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.092015 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-d68b9"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.092201 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.092479 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.092553 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.092769 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 09 16:29:18.096098 master-0 kubenswrapper[7604]: I0309 16:29:18.095947 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 09 16:29:18.099367 master-0 kubenswrapper[7604]: I0309 16:29:18.099247 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"]
Mar 09 16:29:18.101639 master-0 kubenswrapper[7604]: I0309 16:29:18.100266 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:29:18.104292 master-0 kubenswrapper[7604]: I0309 16:29:18.104234 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 09 16:29:18.104457 master-0 kubenswrapper[7604]: I0309 16:29:18.104376 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-xhwgr"
Mar 09 16:29:18.113481 master-0 kubenswrapper[7604]: I0309 16:29:18.106082 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" podStartSLOduration=25.106063457 podStartE2EDuration="25.106063457s" podCreationTimestamp="2026-03-09 16:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:29:18.07290458 +0000 UTC m=+215.126874003" watchObservedRunningTime="2026-03-09 16:29:18.106063457 +0000 UTC m=+215.160032880"
Mar 09 16:29:18.113481 master-0 kubenswrapper[7604]: I0309 16:29:18.110746 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"]
Mar 09 16:29:18.117765 master-0 kubenswrapper[7604]: I0309 16:29:18.113964 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"
Mar 09 16:29:18.168461 master-0 kubenswrapper[7604]: I0309 16:29:18.159713 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"]
Mar 09 16:29:18.168461 master-0 kubenswrapper[7604]: I0309 16:29:18.167544 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"]
Mar 09 16:29:18.185495 master-0 kubenswrapper[7604]: I0309 16:29:18.169273 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" podStartSLOduration=38.455388767 podStartE2EDuration="45.1692589s" podCreationTimestamp="2026-03-09 16:28:33 +0000 UTC" firstStartedPulling="2026-03-09 16:28:50.649309181 +0000 UTC m=+187.703278604" lastFinishedPulling="2026-03-09 16:28:57.363179314 +0000 UTC m=+194.417148737" observedRunningTime="2026-03-09 16:29:18.100954381 +0000 UTC m=+215.154923824" watchObservedRunningTime="2026-03-09 16:29:18.1692589 +0000 UTC m=+215.223228323"
Mar 09 16:29:18.185495 master-0 kubenswrapper[7604]: I0309 16:29:18.175674 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sj6x9" podStartSLOduration=26.123589612 podStartE2EDuration="50.175658613s" podCreationTimestamp="2026-03-09 16:28:28 +0000 UTC" firstStartedPulling="2026-03-09 16:28:51.275683444 +0000 UTC m=+188.329652867" lastFinishedPulling="2026-03-09 16:29:15.327752445 +0000 UTC m=+212.381721868" observedRunningTime="2026-03-09 16:29:18.130817163 +0000 UTC m=+215.184786606" watchObservedRunningTime="2026-03-09 16:29:18.175658613 +0000 UTC m=+215.229628056"
Mar 09 16:29:18.208295 master-0 kubenswrapper[7604]: I0309 16:29:18.196897 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" podStartSLOduration=20.196874278 podStartE2EDuration="20.196874278s" podCreationTimestamp="2026-03-09 16:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:29:18.19516841 +0000 UTC m=+215.249137853" watchObservedRunningTime="2026-03-09 16:29:18.196874278 +0000 UTC m=+215.250843691"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295304 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295369 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjf4p\" (UniqueName: \"kubernetes.io/projected/9482fb93-c223-45ee-bde8-7667303270b6-kube-api-access-qjf4p\") pod \"network-check-source-7c67b67d47-d9wjb\" (UID: \"9482fb93-c223-45ee-bde8-7667303270b6\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295416 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295598 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlnq\" (UniqueName: \"kubernetes.io/projected/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-kube-api-access-dxlnq\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295628 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kdqvv\" (UID: \"e346cb5b-411d-4014-a8d0-590d8deee8ac\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295660 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.295762 master-0 kubenswrapper[7604]: I0309 16:29:18.295693 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.334077 master-0 kubenswrapper[7604]: I0309 16:29:18.333980 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" podStartSLOduration=13.333954381 podStartE2EDuration="13.333954381s" podCreationTimestamp="2026-03-09 16:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:29:18.33005971 +0000 UTC m=+215.384029143" watchObservedRunningTime="2026-03-09 16:29:18.333954381 +0000 UTC m=+215.387923804"
Mar 09 16:29:18.401304 master-0 kubenswrapper[7604]: I0309 16:29:18.401208 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.401546 master-0 kubenswrapper[7604]: I0309 16:29:18.401338 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.401546 master-0 kubenswrapper[7604]: I0309 16:29:18.401387 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjf4p\" (UniqueName: \"kubernetes.io/projected/9482fb93-c223-45ee-bde8-7667303270b6-kube-api-access-qjf4p\") pod \"network-check-source-7c67b67d47-d9wjb\" (UID: \"9482fb93-c223-45ee-bde8-7667303270b6\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"
Mar 09 16:29:18.401546 master-0 kubenswrapper[7604]: I0309 16:29:18.401462 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.401546 master-0 kubenswrapper[7604]: I0309 16:29:18.401508 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlnq\" (UniqueName: \"kubernetes.io/projected/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-kube-api-access-dxlnq\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.401546 master-0 kubenswrapper[7604]: I0309 16:29:18.401535 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kdqvv\" (UID: \"e346cb5b-411d-4014-a8d0-590d8deee8ac\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:29:18.401703 master-0 kubenswrapper[7604]: I0309 16:29:18.401567 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.402441 master-0 kubenswrapper[7604]: I0309 16:29:18.402373 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.415125 master-0 kubenswrapper[7604]: I0309 16:29:18.415059 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.415328 master-0 kubenswrapper[7604]: I0309 16:29:18.415154 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.428359 master-0 kubenswrapper[7604]: I0309 16:29:18.426560 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kdqvv\" (UID: \"e346cb5b-411d-4014-a8d0-590d8deee8ac\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:29:18.442033 master-0 kubenswrapper[7604]: I0309 16:29:18.441896 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:29:18.442376 master-0 kubenswrapper[7604]: I0309 16:29:18.441925 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.444408 master-0 kubenswrapper[7604]: I0309 16:29:18.444350 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlnq\" (UniqueName: \"kubernetes.io/projected/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-kube-api-access-dxlnq\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:29:18.445318 master-0 kubenswrapper[7604]: I0309 16:29:18.445270 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjf4p\" (UniqueName: \"kubernetes.io/projected/9482fb93-c223-45ee-bde8-7667303270b6-kube-api-access-qjf4p\") pod \"network-check-source-7c67b67d47-d9wjb\" (UID: \"9482fb93-c223-45ee-bde8-7667303270b6\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"
Mar 09 16:29:18.466295 master-0 kubenswrapper[7604]: I0309 16:29:18.466243 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"
Mar 09 16:29:18.470415 master-0 kubenswrapper[7604]: I0309 16:29:18.470369 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcvrg" event={"ID":"be856881-2ceb-4803-8330-4a27ad8b1937","Type":"ContainerStarted","Data":"573844554df6c9f312bbcbb5ee95b26b362c6d2077424969865cfe0f6054802e"}
Mar 09 16:29:18.580522 master-0 kubenswrapper[7604]: I0309 16:29:18.576800 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcvrg" podStartSLOduration=14.747144243 podStartE2EDuration="40.576778052s" podCreationTimestamp="2026-03-09 16:28:38 +0000 UTC" firstStartedPulling="2026-03-09 16:28:52.116539704 +0000 UTC m=+189.170509137" lastFinishedPulling="2026-03-09 16:29:17.946173523 +0000 UTC m=+215.000142946" observedRunningTime="2026-03-09 16:29:18.576093063 +0000 UTC m=+215.630062506" watchObservedRunningTime="2026-03-09 16:29:18.576778052 +0000 UTC m=+215.630747475"
Mar 09 16:29:18.724869 master-0 kubenswrapper[7604]: I0309 16:29:18.724809 7604 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:29:18.948539 master-0 kubenswrapper[7604]: I0309 16:29:18.948487 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"] Mar 09 16:29:18.955972 master-0 kubenswrapper[7604]: W0309 16:29:18.953635 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode346cb5b_411d_4014_a8d0_590d8deee8ac.slice/crio-49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04 WatchSource:0}: Error finding container 49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04: Status 404 returned error can't find the container with id 49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04 Mar 09 16:29:18.964168 master-0 kubenswrapper[7604]: I0309 16:29:18.964104 7604 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 09 16:29:18.997861 master-0 kubenswrapper[7604]: I0309 16:29:18.997731 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb"] Mar 09 16:29:19.199885 master-0 kubenswrapper[7604]: I0309 16:29:19.199753 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:29:19.199885 master-0 kubenswrapper[7604]: I0309 16:29:19.199808 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:29:19.485573 master-0 kubenswrapper[7604]: I0309 16:29:19.484955 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" 
event={"ID":"e346cb5b-411d-4014-a8d0-590d8deee8ac","Type":"ContainerStarted","Data":"49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04"} Mar 09 16:29:19.501796 master-0 kubenswrapper[7604]: I0309 16:29:19.501547 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb" event={"ID":"9482fb93-c223-45ee-bde8-7667303270b6","Type":"ContainerStarted","Data":"e7cf2f479335501ee9acfead9ab1240c3e4d90a810b6ca98bf86ac04646af782"} Mar 09 16:29:19.501796 master-0 kubenswrapper[7604]: I0309 16:29:19.501650 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb" event={"ID":"9482fb93-c223-45ee-bde8-7667303270b6","Type":"ContainerStarted","Data":"c400ace13e0290ea978d90a75cda129235df657b46ef5808d10268996d05129a"} Mar 09 16:29:19.507199 master-0 kubenswrapper[7604]: I0309 16:29:19.507124 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"9dc2251ac339285f7e616265d59b743eecae28fcec97875a6787ff662520db27"} Mar 09 16:29:19.514723 master-0 kubenswrapper[7604]: I0309 16:29:19.514595 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49bwx" event={"ID":"1da6f189-535a-4bf1-bbdb-758327651ae2","Type":"ContainerStarted","Data":"f646b343ced9e7e19e326d10730bdd70568232833b7161aab3f880b0d97ac338"} Mar 09 16:29:19.521180 master-0 kubenswrapper[7604]: I0309 16:29:19.521040 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb" podStartSLOduration=274.521012983 podStartE2EDuration="4m34.521012983s" podCreationTimestamp="2026-03-09 16:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-09 16:29:19.517217215 +0000 UTC m=+216.571186648" watchObservedRunningTime="2026-03-09 16:29:19.521012983 +0000 UTC m=+216.574982406" Mar 09 16:29:19.553622 master-0 kubenswrapper[7604]: I0309 16:29:19.551020 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49bwx" podStartSLOduration=14.443412137 podStartE2EDuration="40.55099929s" podCreationTimestamp="2026-03-09 16:28:39 +0000 UTC" firstStartedPulling="2026-03-09 16:28:52.137024478 +0000 UTC m=+189.190993901" lastFinishedPulling="2026-03-09 16:29:18.244611631 +0000 UTC m=+215.298581054" observedRunningTime="2026-03-09 16:29:19.547748066 +0000 UTC m=+216.601717499" watchObservedRunningTime="2026-03-09 16:29:19.55099929 +0000 UTC m=+216.604968713" Mar 09 16:29:19.800033 master-0 kubenswrapper[7604]: I0309 16:29:19.799587 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:29:19.800033 master-0 kubenswrapper[7604]: I0309 16:29:19.799857 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:29:20.260780 master-0 kubenswrapper[7604]: I0309 16:29:20.260695 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zcvrg" podUID="be856881-2ceb-4803-8330-4a27ad8b1937" containerName="registry-server" probeResult="failure" output=< Mar 09 16:29:20.260780 master-0 kubenswrapper[7604]: timeout: failed to connect service ":50051" within 1s Mar 09 16:29:20.260780 master-0 kubenswrapper[7604]: > Mar 09 16:29:20.847844 master-0 kubenswrapper[7604]: I0309 16:29:20.847725 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49bwx" podUID="1da6f189-535a-4bf1-bbdb-758327651ae2" containerName="registry-server" probeResult="failure" output=< Mar 09 16:29:20.847844 master-0 kubenswrapper[7604]: 
timeout: failed to connect service ":50051" within 1s Mar 09 16:29:20.847844 master-0 kubenswrapper[7604]: > Mar 09 16:29:21.448944 master-0 kubenswrapper[7604]: I0309 16:29:21.448831 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-7d5bx"] Mar 09 16:29:21.450440 master-0 kubenswrapper[7604]: I0309 16:29:21.450298 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.453614 master-0 kubenswrapper[7604]: I0309 16:29:21.453320 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 09 16:29:21.453740 master-0 kubenswrapper[7604]: I0309 16:29:21.453624 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-mwccd" Mar 09 16:29:21.454590 master-0 kubenswrapper[7604]: I0309 16:29:21.453993 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 09 16:29:21.481615 master-0 kubenswrapper[7604]: I0309 16:29:21.481506 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrms4\" (UniqueName: \"kubernetes.io/projected/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-kube-api-access-rrms4\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.482206 master-0 kubenswrapper[7604]: I0309 16:29:21.481655 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " 
pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.482206 master-0 kubenswrapper[7604]: I0309 16:29:21.481831 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.583289 master-0 kubenswrapper[7604]: I0309 16:29:21.583233 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.583537 master-0 kubenswrapper[7604]: I0309 16:29:21.583297 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrms4\" (UniqueName: \"kubernetes.io/projected/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-kube-api-access-rrms4\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.583537 master-0 kubenswrapper[7604]: I0309 16:29:21.583331 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.588850 master-0 kubenswrapper[7604]: I0309 16:29:21.587046 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.600523 master-0 kubenswrapper[7604]: I0309 16:29:21.600456 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.605335 master-0 kubenswrapper[7604]: I0309 16:29:21.605297 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrms4\" (UniqueName: \"kubernetes.io/projected/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-kube-api-access-rrms4\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:21.774554 master-0 kubenswrapper[7604]: I0309 16:29:21.774081 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:29:24.556443 master-0 kubenswrapper[7604]: I0309 16:29:24.556374 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7d5bx" event={"ID":"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9","Type":"ContainerStarted","Data":"e68e729dc16b7303d9fa69af7f0d39f2249d9f66e6c9ceb43ec2254fd7af17fe"} Mar 09 16:29:25.563083 master-0 kubenswrapper[7604]: I0309 16:29:25.563017 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-7d5bx" event={"ID":"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9","Type":"ContainerStarted","Data":"61749e27d392271307f2120930c9d7fc765dbd3dcb58503d52c6e97ebedf837b"} Mar 09 16:29:25.564259 master-0 kubenswrapper[7604]: I0309 16:29:25.564197 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" event={"ID":"e346cb5b-411d-4014-a8d0-590d8deee8ac","Type":"ContainerStarted","Data":"0fca5658277cc8aa739718532c11d78647e00002dee9c8fc36f004fe5cecd41b"} Mar 09 16:29:25.564475 master-0 kubenswrapper[7604]: I0309 16:29:25.564447 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" Mar 09 16:29:25.568959 master-0 kubenswrapper[7604]: I0309 16:29:25.568914 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" Mar 09 16:29:27.039753 master-0 kubenswrapper[7604]: I0309 16:29:27.039495 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sj6x9" Mar 09 16:29:28.495918 master-0 kubenswrapper[7604]: I0309 16:29:28.495777 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-7d5bx" podStartSLOduration=7.495744308 podStartE2EDuration="7.495744308s" podCreationTimestamp="2026-03-09 16:29:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:29:27.432029206 +0000 UTC m=+224.485998649" watchObservedRunningTime="2026-03-09 16:29:28.495744308 +0000 UTC m=+225.549713741" Mar 09 16:29:28.584077 master-0 kubenswrapper[7604]: I0309 16:29:28.583990 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"93efe9411f2e38cd517ba36a435f06b5ae09ea631b8beedeb3e3a210ec78c7fe"} Mar 09 16:29:28.725642 master-0 kubenswrapper[7604]: I0309 16:29:28.725570 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:29:28.725642 master-0 kubenswrapper[7604]: I0309 16:29:28.725648 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:29:28.727953 master-0 kubenswrapper[7604]: I0309 16:29:28.727911 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:28.727953 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:28.727953 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:28.727953 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:28.728144 master-0 kubenswrapper[7604]: I0309 16:29:28.727972 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:29.231745 master-0 kubenswrapper[7604]: I0309 16:29:29.231669 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:29:29.259055 master-0 kubenswrapper[7604]: I0309 16:29:29.256830 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" podStartSLOduration=51.135352698 podStartE2EDuration="56.25680614s" podCreationTimestamp="2026-03-09 16:28:33 +0000 UTC" firstStartedPulling="2026-03-09 16:29:18.956329995 +0000 UTC m=+216.010299418" lastFinishedPulling="2026-03-09 16:29:24.077783437 +0000 UTC m=+221.131752860" observedRunningTime="2026-03-09 16:29:29.256641276 +0000 UTC m=+226.310610719" watchObservedRunningTime="2026-03-09 16:29:29.25680614 +0000 UTC m=+226.310775573" Mar 09 16:29:29.282774 master-0 kubenswrapper[7604]: I0309 16:29:29.282720 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:29:29.728119 master-0 kubenswrapper[7604]: I0309 16:29:29.728031 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:29.728119 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:29.728119 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:29.728119 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:29.728119 master-0 kubenswrapper[7604]: I0309 16:29:29.728111 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:29.840063 master-0 kubenswrapper[7604]: I0309 16:29:29.839990 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:29:29.902035 master-0 kubenswrapper[7604]: I0309 16:29:29.901956 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:29:30.731208 master-0 kubenswrapper[7604]: I0309 16:29:30.731101 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:30.731208 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:30.731208 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:30.731208 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:30.732016 master-0 kubenswrapper[7604]: I0309 16:29:30.731216 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:31.727727 master-0 kubenswrapper[7604]: I0309 16:29:31.727632 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:31.727727 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:31.727727 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:31.727727 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:31.727727 master-0 kubenswrapper[7604]: I0309 16:29:31.727709 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:32.487738 master-0 kubenswrapper[7604]: I0309 16:29:32.487650 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podStartSLOduration=52.687004782 podStartE2EDuration="1m1.487621197s" podCreationTimestamp="2026-03-09 16:28:31 +0000 UTC" firstStartedPulling="2026-03-09 16:29:18.755313448 +0000 UTC m=+215.809282871" lastFinishedPulling="2026-03-09 16:29:27.555929863 +0000 UTC m=+224.609899286" observedRunningTime="2026-03-09 16:29:32.484959611 +0000 UTC m=+229.538929054" watchObservedRunningTime="2026-03-09 16:29:32.487621197 +0000 UTC m=+229.541590620" Mar 09 16:29:32.729412 master-0 kubenswrapper[7604]: I0309 16:29:32.729282 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:32.729412 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:32.729412 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:32.729412 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:32.729815 master-0 kubenswrapper[7604]: I0309 16:29:32.729451 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:33.728077 master-0 kubenswrapper[7604]: I0309 16:29:33.727979 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:33.728077 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:33.728077 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:33.728077 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:33.728077 master-0 kubenswrapper[7604]: I0309 16:29:33.728075 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:34.728136 master-0 kubenswrapper[7604]: I0309 16:29:34.728069 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:34.728136 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:34.728136 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:34.728136 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:34.728136 master-0 kubenswrapper[7604]: I0309 16:29:34.728136 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:35.728822 master-0 kubenswrapper[7604]: I0309 16:29:35.728715 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:35.728822 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:29:35.728822 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:35.728822 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:35.729657 master-0 kubenswrapper[7604]: I0309 16:29:35.728863 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:36.728932 master-0 kubenswrapper[7604]: I0309 16:29:36.728833 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:36.728932 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:36.728932 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:36.728932 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:36.729805 master-0 kubenswrapper[7604]: I0309 16:29:36.728950 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:36.979155 master-0 kubenswrapper[7604]: I0309 16:29:36.978978 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"] Mar 09 16:29:36.979960 master-0 kubenswrapper[7604]: I0309 16:29:36.979923 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:29:36.982204 master-0 kubenswrapper[7604]: I0309 16:29:36.982152 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 09 16:29:36.982546 master-0 kubenswrapper[7604]: I0309 16:29:36.982517 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-n45mc" Mar 09 16:29:36.983492 master-0 kubenswrapper[7604]: I0309 16:29:36.983415 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 09 16:29:36.983844 master-0 kubenswrapper[7604]: I0309 16:29:36.983809 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 09 16:29:37.016656 master-0 kubenswrapper[7604]: I0309 16:29:37.016574 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"] Mar 09 16:29:37.119372 master-0 kubenswrapper[7604]: I0309 16:29:37.119241 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:29:37.119806 master-0 kubenswrapper[7604]: I0309 16:29:37.119772 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " 
pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.119958 master-0 kubenswrapper[7604]: I0309 16:29:37.119935 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.120137 master-0 kubenswrapper[7604]: I0309 16:29:37.120117 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvfgw\" (UniqueName: \"kubernetes.io/projected/e5c4ccb0-f795-44bd-9bb4-baf84564c239-kube-api-access-cvfgw\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.222723 master-0 kubenswrapper[7604]: I0309 16:29:37.221687 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.222723 master-0 kubenswrapper[7604]: I0309 16:29:37.221797 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.222723 master-0 kubenswrapper[7604]: I0309 16:29:37.221825 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.222723 master-0 kubenswrapper[7604]: I0309 16:29:37.221863 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfgw\" (UniqueName: \"kubernetes.io/projected/e5c4ccb0-f795-44bd-9bb4-baf84564c239-kube-api-access-cvfgw\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.224107 master-0 kubenswrapper[7604]: I0309 16:29:37.224090 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.226896 master-0 kubenswrapper[7604]: I0309 16:29:37.226834 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.227128 master-0 kubenswrapper[7604]: I0309 16:29:37.227054 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.244683 master-0 kubenswrapper[7604]: I0309 16:29:37.244583 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfgw\" (UniqueName: \"kubernetes.io/projected/e5c4ccb0-f795-44bd-9bb4-baf84564c239-kube-api-access-cvfgw\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.298031 master-0 kubenswrapper[7604]: I0309 16:29:37.297961 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:29:37.713814 master-0 kubenswrapper[7604]: I0309 16:29:37.713740 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"]
Mar 09 16:29:37.720609 master-0 kubenswrapper[7604]: W0309 16:29:37.720548 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5c4ccb0_f795_44bd_9bb4_baf84564c239.slice/crio-61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559 WatchSource:0}: Error finding container 61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559: Status 404 returned error can't find the container with id 61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559
Mar 09 16:29:37.727719 master-0 kubenswrapper[7604]: I0309 16:29:37.727662 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:37.727719 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:37.727719 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:37.727719 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:37.728242 master-0 kubenswrapper[7604]: I0309 16:29:37.727732 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:38.697247 master-0 kubenswrapper[7604]: I0309 16:29:38.697165 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" event={"ID":"e5c4ccb0-f795-44bd-9bb4-baf84564c239","Type":"ContainerStarted","Data":"61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559"}
Mar 09 16:29:38.729027 master-0 kubenswrapper[7604]: I0309 16:29:38.728894 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:38.729027 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:38.729027 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:38.729027 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:38.729345 master-0 kubenswrapper[7604]: I0309 16:29:38.729047 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:39.704577 master-0 kubenswrapper[7604]: I0309 16:29:39.704519 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" event={"ID":"e5c4ccb0-f795-44bd-9bb4-baf84564c239","Type":"ContainerStarted","Data":"16249349e7b5419eca1a92a2b9d9dfb4a123bc35d00d0df0e3a5767342386214"}
Mar 09 16:29:39.733299 master-0 kubenswrapper[7604]: I0309 16:29:39.733240 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:39.733299 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:39.733299 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:39.733299 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:39.733711 master-0 kubenswrapper[7604]: I0309 16:29:39.733316 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:40.710109 master-0 kubenswrapper[7604]: I0309 16:29:40.710056 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" event={"ID":"e5c4ccb0-f795-44bd-9bb4-baf84564c239","Type":"ContainerStarted","Data":"86288479a739a09a67e1c06a02931a00679de1626faf4c3079027daeb193f43d"}
Mar 09 16:29:40.730720 master-0 kubenswrapper[7604]: I0309 16:29:40.730642 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:40.730720 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:40.730720 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:40.730720 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:40.730720 master-0 kubenswrapper[7604]: I0309 16:29:40.730718 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:40.738721 master-0 kubenswrapper[7604]: I0309 16:29:40.738499 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" podStartSLOduration=2.94117069 podStartE2EDuration="4.738481341s" podCreationTimestamp="2026-03-09 16:29:36 +0000 UTC" firstStartedPulling="2026-03-09 16:29:37.723066851 +0000 UTC m=+234.777036274" lastFinishedPulling="2026-03-09 16:29:39.520377502 +0000 UTC m=+236.574346925" observedRunningTime="2026-03-09 16:29:40.736626017 +0000 UTC m=+237.790595450" watchObservedRunningTime="2026-03-09 16:29:40.738481341 +0000 UTC m=+237.792450784"
Mar 09 16:29:41.727945 master-0 kubenswrapper[7604]: I0309 16:29:41.727864 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:41.727945 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:41.727945 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:41.727945 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:41.727945 master-0 kubenswrapper[7604]: I0309 16:29:41.727939 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:42.665473 master-0 kubenswrapper[7604]: I0309 16:29:42.665403 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"]
Mar 09 16:29:42.666683 master-0 kubenswrapper[7604]: I0309 16:29:42.666660 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.669591 master-0 kubenswrapper[7604]: I0309 16:29:42.669552 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 09 16:29:42.670579 master-0 kubenswrapper[7604]: I0309 16:29:42.670545 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vp2pt"
Mar 09 16:29:42.673020 master-0 kubenswrapper[7604]: I0309 16:29:42.672929 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 09 16:29:42.690295 master-0 kubenswrapper[7604]: I0309 16:29:42.690241 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"]
Mar 09 16:29:42.698599 master-0 kubenswrapper[7604]: I0309 16:29:42.698539 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"]
Mar 09 16:29:42.700239 master-0 kubenswrapper[7604]: I0309 16:29:42.700205 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.702119 master-0 kubenswrapper[7604]: I0309 16:29:42.702070 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 09 16:29:42.702385 master-0 kubenswrapper[7604]: I0309 16:29:42.702348 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-h7zpd"
Mar 09 16:29:42.702736 master-0 kubenswrapper[7604]: I0309 16:29:42.702661 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-qjk4k"]
Mar 09 16:29:42.704306 master-0 kubenswrapper[7604]: I0309 16:29:42.704277 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.711055 master-0 kubenswrapper[7604]: I0309 16:29:42.710438 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 09 16:29:42.711055 master-0 kubenswrapper[7604]: I0309 16:29:42.710625 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 09 16:29:42.712540 master-0 kubenswrapper[7604]: I0309 16:29:42.712488 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"]
Mar 09 16:29:42.718142 master-0 kubenswrapper[7604]: I0309 16:29:42.718062 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 09 16:29:42.718493 master-0 kubenswrapper[7604]: I0309 16:29:42.718412 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 09 16:29:42.720826 master-0 kubenswrapper[7604]: I0309 16:29:42.720604 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9j6gd"
Mar 09 16:29:42.733040 master-0 kubenswrapper[7604]: I0309 16:29:42.732941 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:42.733040 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:42.733040 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:42.733040 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:42.733684 master-0 kubenswrapper[7604]: I0309 16:29:42.733098 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813032 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-textfile\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813079 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8rh\" (UniqueName: \"kubernetes.io/projected/92bd7735-8e3c-43bb-b543-03e6e6c5142a-kube-api-access-dv8rh\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813146 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813177 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813206 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ebbec674-ac49-422a-9548-5c29b15ad44d-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813223 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813245 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-sys\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813261 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813279 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-wtmp\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813302 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhct\" (UniqueName: \"kubernetes.io/projected/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-api-access-jrhct\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813324 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.813309 master-0 kubenswrapper[7604]: I0309 16:29:42.813342 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.814375 master-0 kubenswrapper[7604]: I0309 16:29:42.813435 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-root\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.814375 master-0 kubenswrapper[7604]: I0309 16:29:42.813553 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.814375 master-0 kubenswrapper[7604]: I0309 16:29:42.813572 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.814375 master-0 kubenswrapper[7604]: I0309 16:29:42.813593 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.814375 master-0 kubenswrapper[7604]: I0309 16:29:42.813608 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.814375 master-0 kubenswrapper[7604]: I0309 16:29:42.813632 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnstc\" (UniqueName: \"kubernetes.io/projected/b9fc9e7d-652c-4063-9cdb-358f58cae29a-kube-api-access-xnstc\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.915544 master-0 kubenswrapper[7604]: I0309 16:29:42.915372 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.915544 master-0 kubenswrapper[7604]: I0309 16:29:42.915472 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ebbec674-ac49-422a-9548-5c29b15ad44d-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.915544 master-0 kubenswrapper[7604]: I0309 16:29:42.915514 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915602 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-sys\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915635 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915662 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-wtmp\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915692 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhct\" (UniqueName: \"kubernetes.io/projected/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-api-access-jrhct\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915725 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915769 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-root\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.915820 master-0 kubenswrapper[7604]: I0309 16:29:42.915808 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.915858 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.915892 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.915920 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.915945 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.915978 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnstc\" (UniqueName: \"kubernetes.io/projected/b9fc9e7d-652c-4063-9cdb-358f58cae29a-kube-api-access-xnstc\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.916018 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-textfile\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.916053 master-0 kubenswrapper[7604]: I0309 16:29:42.916041 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8rh\" (UniqueName: \"kubernetes.io/projected/92bd7735-8e3c-43bb-b543-03e6e6c5142a-kube-api-access-dv8rh\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.916369 master-0 kubenswrapper[7604]: I0309 16:29:42.916079 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.916369 master-0 kubenswrapper[7604]: E0309 16:29:42.916291 7604 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Mar 09 16:29:42.916369 master-0 kubenswrapper[7604]: E0309 16:29:42.916371 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. No retries permitted until 2026-03-09 16:29:43.416349023 +0000 UTC m=+240.470318446 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : secret "node-exporter-tls" not found
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.920934 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-sys\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.921096 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-textfile\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.921504 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-wtmp\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.921546 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-root\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.921689 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.921861 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.922690 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.924385 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ebbec674-ac49-422a-9548-5c29b15ad44d-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.924577 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.926267 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.927085 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.927627 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.928140 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.928476 master-0 kubenswrapper[7604]: I0309 16:29:42.928166 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.944053 master-0 kubenswrapper[7604]: I0309 16:29:42.942602 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhct\" (UniqueName: \"kubernetes.io/projected/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-api-access-jrhct\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:29:42.945776 master-0 kubenswrapper[7604]: I0309 16:29:42.945627 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8rh\" (UniqueName: \"kubernetes.io/projected/92bd7735-8e3c-43bb-b543-03e6e6c5142a-kube-api-access-dv8rh\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:42.949703 master-0 kubenswrapper[7604]: I0309 16:29:42.948901 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnstc\" (UniqueName: \"kubernetes.io/projected/b9fc9e7d-652c-4063-9cdb-358f58cae29a-kube-api-access-xnstc\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:29:42.981319 master-0 kubenswrapper[7604]: I0309 16:29:42.981190 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:29:43.052698 master-0 kubenswrapper[7604]: I0309 16:29:43.052608 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:29:43.425161 master-0 kubenswrapper[7604]: I0309 16:29:43.424963 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:29:43.428121 master-0 kubenswrapper[7604]: I0309 16:29:43.428089 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:29:43.535528 master-0 kubenswrapper[7604]: I0309 16:29:43.532933 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"] Mar 09 16:29:43.569460 master-0 kubenswrapper[7604]: I0309 16:29:43.569378 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"] Mar 09 16:29:43.586097 master-0 kubenswrapper[7604]: W0309 16:29:43.586044 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbec674_ac49_422a_9548_5c29b15ad44d.slice/crio-33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6 WatchSource:0}: Error finding container 33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6: Status 404 returned error can't find the container with id 33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6 Mar 09 16:29:43.671105 master-0 kubenswrapper[7604]: I0309 16:29:43.671036 7604 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-dockercfg-9j6gd" Mar 09 16:29:43.685086 master-0 kubenswrapper[7604]: I0309 16:29:43.681717 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:29:43.728807 master-0 kubenswrapper[7604]: I0309 16:29:43.728766 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:43.728807 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:43.728807 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:43.728807 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:43.729013 master-0 kubenswrapper[7604]: I0309 16:29:43.728825 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:43.731071 master-0 kubenswrapper[7604]: I0309 16:29:43.730185 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" event={"ID":"ebbec674-ac49-422a-9548-5c29b15ad44d","Type":"ContainerStarted","Data":"33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6"} Mar 09 16:29:43.732528 master-0 kubenswrapper[7604]: I0309 16:29:43.732458 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" event={"ID":"92bd7735-8e3c-43bb-b543-03e6e6c5142a","Type":"ContainerStarted","Data":"ffb7db71fa52967bb45e5b9d2b58bddcbf5e18e10773a42962dd31c324190cd9"} Mar 09 16:29:43.732601 master-0 kubenswrapper[7604]: I0309 16:29:43.732524 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" event={"ID":"92bd7735-8e3c-43bb-b543-03e6e6c5142a","Type":"ContainerStarted","Data":"fb4bd8cef53e72d659379e281e583b2e2ff3d1ae2b420acbf269067cfbc2882a"} Mar 09 16:29:44.729027 master-0 kubenswrapper[7604]: I0309 16:29:44.728969 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:44.729027 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:44.729027 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:44.729027 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:44.729846 master-0 kubenswrapper[7604]: I0309 16:29:44.729049 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:44.742508 master-0 kubenswrapper[7604]: I0309 16:29:44.742410 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" event={"ID":"92bd7735-8e3c-43bb-b543-03e6e6c5142a","Type":"ContainerStarted","Data":"51098eeddbddc7dbb6490b45698c9d5460af6d8a7cc5ac1b628d8531ddf064ab"} Mar 09 16:29:44.746234 master-0 kubenswrapper[7604]: I0309 16:29:44.746186 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qjk4k" event={"ID":"b9fc9e7d-652c-4063-9cdb-358f58cae29a","Type":"ContainerStarted","Data":"57aaf330726fe627a8a61909fad0b332f97b99d8101a20fb9a743ae449fbfca5"} Mar 09 16:29:45.728834 master-0 kubenswrapper[7604]: I0309 16:29:45.728763 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:45.728834 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:45.728834 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:45.728834 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:45.729372 master-0 kubenswrapper[7604]: I0309 16:29:45.728858 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:46.727979 master-0 kubenswrapper[7604]: I0309 16:29:46.727917 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:46.727979 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:46.727979 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:46.727979 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:46.728240 master-0 kubenswrapper[7604]: I0309 16:29:46.727983 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:46.763721 master-0 kubenswrapper[7604]: I0309 16:29:46.763568 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" event={"ID":"92bd7735-8e3c-43bb-b543-03e6e6c5142a","Type":"ContainerStarted","Data":"b14385f8d061fa2e1bb169199dbbc1a0b27c597ea40b9c92a4f36e760b8c1dcd"} Mar 09 16:29:46.765767 master-0 kubenswrapper[7604]: I0309 16:29:46.765705 
7604 generic.go:334] "Generic (PLEG): container finished" podID="b9fc9e7d-652c-4063-9cdb-358f58cae29a" containerID="9f1f79ee7ed70eccdd62d45b2f4106c0429d123cf8b355c716fdcb468ee74764" exitCode=0 Mar 09 16:29:46.765892 master-0 kubenswrapper[7604]: I0309 16:29:46.765793 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qjk4k" event={"ID":"b9fc9e7d-652c-4063-9cdb-358f58cae29a","Type":"ContainerDied","Data":"9f1f79ee7ed70eccdd62d45b2f4106c0429d123cf8b355c716fdcb468ee74764"} Mar 09 16:29:46.768507 master-0 kubenswrapper[7604]: I0309 16:29:46.768461 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" event={"ID":"ebbec674-ac49-422a-9548-5c29b15ad44d","Type":"ContainerStarted","Data":"c446a7b126157eca37c8d6fcc018d8d34265dfaff72300f572a51f4bf4a2f9e3"} Mar 09 16:29:46.768507 master-0 kubenswrapper[7604]: I0309 16:29:46.768505 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" event={"ID":"ebbec674-ac49-422a-9548-5c29b15ad44d","Type":"ContainerStarted","Data":"59736e61395a8f6a2e3a495610cdecdc313fb3d8eded773dcadab4b593af1b7f"} Mar 09 16:29:46.768687 master-0 kubenswrapper[7604]: I0309 16:29:46.768517 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" event={"ID":"ebbec674-ac49-422a-9548-5c29b15ad44d","Type":"ContainerStarted","Data":"e6093fd3d5965b14bb0a366a6a047cf02a70f7ff77b42e32f443d457b2654c16"} Mar 09 16:29:46.956190 master-0 kubenswrapper[7604]: I0309 16:29:46.953990 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" podStartSLOduration=2.953536211 podStartE2EDuration="4.953970539s" podCreationTimestamp="2026-03-09 16:29:42 +0000 UTC" firstStartedPulling="2026-03-09 16:29:43.804232026 +0000 UTC m=+240.858201449" 
lastFinishedPulling="2026-03-09 16:29:45.804666354 +0000 UTC m=+242.858635777" observedRunningTime="2026-03-09 16:29:46.951275052 +0000 UTC m=+244.005244495" watchObservedRunningTime="2026-03-09 16:29:46.953970539 +0000 UTC m=+244.007939962" Mar 09 16:29:47.024978 master-0 kubenswrapper[7604]: I0309 16:29:47.024902 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" podStartSLOduration=2.8133592480000003 podStartE2EDuration="5.024873352s" podCreationTimestamp="2026-03-09 16:29:42 +0000 UTC" firstStartedPulling="2026-03-09 16:29:43.588508548 +0000 UTC m=+240.642477971" lastFinishedPulling="2026-03-09 16:29:45.800022652 +0000 UTC m=+242.853992075" observedRunningTime="2026-03-09 16:29:47.019613622 +0000 UTC m=+244.073583055" watchObservedRunningTime="2026-03-09 16:29:47.024873352 +0000 UTC m=+244.078842775" Mar 09 16:29:47.727277 master-0 kubenswrapper[7604]: I0309 16:29:47.727226 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:47.727277 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:47.727277 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:47.727277 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:47.727669 master-0 kubenswrapper[7604]: I0309 16:29:47.727304 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:47.776109 master-0 kubenswrapper[7604]: I0309 16:29:47.776056 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qjk4k" 
event={"ID":"b9fc9e7d-652c-4063-9cdb-358f58cae29a","Type":"ContainerStarted","Data":"623f050e8c2c055ccfc99dfe01537714b78e6c6b48ae4b5527580490f57031e8"} Mar 09 16:29:47.776878 master-0 kubenswrapper[7604]: I0309 16:29:47.776123 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qjk4k" event={"ID":"b9fc9e7d-652c-4063-9cdb-358f58cae29a","Type":"ContainerStarted","Data":"c4bddbbc41c5246aeb42f769a47decfa07036c03823b2865405cdbfdcab748df"} Mar 09 16:29:47.795520 master-0 kubenswrapper[7604]: I0309 16:29:47.795439 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-qjk4k" podStartSLOduration=3.7247789239999998 podStartE2EDuration="5.795397175s" podCreationTimestamp="2026-03-09 16:29:42 +0000 UTC" firstStartedPulling="2026-03-09 16:29:43.726685693 +0000 UTC m=+240.780655116" lastFinishedPulling="2026-03-09 16:29:45.797303944 +0000 UTC m=+242.851273367" observedRunningTime="2026-03-09 16:29:47.791929667 +0000 UTC m=+244.845899110" watchObservedRunningTime="2026-03-09 16:29:47.795397175 +0000 UTC m=+244.849366598" Mar 09 16:29:48.000505 master-0 kubenswrapper[7604]: I0309 16:29:48.000355 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7c4558858-9rclt"] Mar 09 16:29:48.001310 master-0 kubenswrapper[7604]: I0309 16:29:48.001279 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.004465 master-0 kubenswrapper[7604]: I0309 16:29:48.004414 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 09 16:29:48.004577 master-0 kubenswrapper[7604]: I0309 16:29:48.004564 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5k05m0jd20f8o" Mar 09 16:29:48.004626 master-0 kubenswrapper[7604]: I0309 16:29:48.004593 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 09 16:29:48.004626 master-0 kubenswrapper[7604]: I0309 16:29:48.004616 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 09 16:29:48.004808 master-0 kubenswrapper[7604]: I0309 16:29:48.004770 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 09 16:29:48.006860 master-0 kubenswrapper[7604]: I0309 16:29:48.006840 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-ns927" Mar 09 16:29:48.009124 master-0 kubenswrapper[7604]: I0309 16:29:48.009089 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7c4558858-9rclt"] Mar 09 16:29:48.112949 master-0 kubenswrapper[7604]: I0309 16:29:48.112898 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.112949 master-0 kubenswrapper[7604]: I0309 16:29:48.112953 7604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.113212 master-0 kubenswrapper[7604]: I0309 16:29:48.112979 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.113212 master-0 kubenswrapper[7604]: I0309 16:29:48.113050 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.113212 master-0 kubenswrapper[7604]: I0309 16:29:48.113083 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.113212 master-0 kubenswrapper[7604]: I0309 16:29:48.113121 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.113212 master-0 kubenswrapper[7604]: I0309 16:29:48.113136 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.214383 master-0 kubenswrapper[7604]: I0309 16:29:48.214342 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.214655 master-0 kubenswrapper[7604]: I0309 16:29:48.214640 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.214847 master-0 kubenswrapper[7604]: I0309 16:29:48.214833 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 
09 16:29:48.214952 master-0 kubenswrapper[7604]: I0309 16:29:48.214940 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.215046 master-0 kubenswrapper[7604]: I0309 16:29:48.215034 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.215192 master-0 kubenswrapper[7604]: I0309 16:29:48.215178 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.215265 master-0 kubenswrapper[7604]: I0309 16:29:48.215253 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.215845 master-0 kubenswrapper[7604]: I0309 16:29:48.215806 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") pod \"metrics-server-7c4558858-9rclt\" (UID: 
\"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.215845 master-0 kubenswrapper[7604]: I0309 16:29:48.215816 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.216206 master-0 kubenswrapper[7604]: I0309 16:29:48.216174 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.219336 master-0 kubenswrapper[7604]: I0309 16:29:48.219300 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.220105 master-0 kubenswrapper[7604]: I0309 16:29:48.220086 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.220254 master-0 kubenswrapper[7604]: I0309 16:29:48.220188 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.236445 master-0 kubenswrapper[7604]: I0309 16:29:48.236386 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.329807 master-0 kubenswrapper[7604]: I0309 16:29:48.329685 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:29:48.728481 master-0 kubenswrapper[7604]: I0309 16:29:48.728063 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:29:48.728481 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:29:48.728481 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:29:48.728481 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:29:48.728481 master-0 kubenswrapper[7604]: I0309 16:29:48.728133 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:29:48.796125 master-0 kubenswrapper[7604]: I0309 16:29:48.796070 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7c4558858-9rclt"] Mar 09 16:29:48.802539 
master-0 kubenswrapper[7604]: W0309 16:29:48.802480 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf3a18d_eccb_4c92_bc2f_f3b85d2c219b.slice/crio-7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20 WatchSource:0}: Error finding container 7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20: Status 404 returned error can't find the container with id 7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20
Mar 09 16:29:49.730444 master-0 kubenswrapper[7604]: I0309 16:29:49.730369 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:49.730444 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:49.730444 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:49.730444 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:49.730796 master-0 kubenswrapper[7604]: I0309 16:29:49.730454 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:49.787662 master-0 kubenswrapper[7604]: I0309 16:29:49.787347 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" event={"ID":"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b","Type":"ContainerStarted","Data":"7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20"}
Mar 09 16:29:50.728359 master-0 kubenswrapper[7604]: I0309 16:29:50.728289 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:50.728359 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:50.728359 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:50.728359 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:50.728359 master-0 kubenswrapper[7604]: I0309 16:29:50.728358 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:50.794803 master-0 kubenswrapper[7604]: I0309 16:29:50.794702 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" event={"ID":"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b","Type":"ContainerStarted","Data":"7c7e82a8000eb584fb2d9fc14766cd7c65340bfb72b0d9d1871812e5a7249542"}
Mar 09 16:29:50.832807 master-0 kubenswrapper[7604]: I0309 16:29:50.832722 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" podStartSLOduration=2.2346448150000002 podStartE2EDuration="3.832706559s" podCreationTimestamp="2026-03-09 16:29:47 +0000 UTC" firstStartedPulling="2026-03-09 16:29:48.804680373 +0000 UTC m=+245.858649796" lastFinishedPulling="2026-03-09 16:29:50.402742117 +0000 UTC m=+247.456711540" observedRunningTime="2026-03-09 16:29:50.83097447 +0000 UTC m=+247.884943893" watchObservedRunningTime="2026-03-09 16:29:50.832706559 +0000 UTC m=+247.886675982"
Mar 09 16:29:51.728076 master-0 kubenswrapper[7604]: I0309 16:29:51.728022 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:51.728076 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:51.728076 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:51.728076 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:51.728401 master-0 kubenswrapper[7604]: I0309 16:29:51.728089 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:52.727464 master-0 kubenswrapper[7604]: I0309 16:29:52.727394 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:52.727464 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:52.727464 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:52.727464 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:52.727749 master-0 kubenswrapper[7604]: I0309 16:29:52.727486 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:53.728169 master-0 kubenswrapper[7604]: I0309 16:29:53.728009 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:53.728169 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:53.728169 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:53.728169 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:53.728169 master-0 kubenswrapper[7604]: I0309 16:29:53.728108 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:54.729300 master-0 kubenswrapper[7604]: I0309 16:29:54.729194 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:54.729300 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:54.729300 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:54.729300 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:54.730143 master-0 kubenswrapper[7604]: I0309 16:29:54.729320 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:55.727671 master-0 kubenswrapper[7604]: I0309 16:29:55.727560 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:55.727671 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:55.727671 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:55.727671 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:55.728333 master-0 kubenswrapper[7604]: I0309 16:29:55.727687 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:56.728163 master-0 kubenswrapper[7604]: I0309 16:29:56.728090 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:56.728163 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:56.728163 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:56.728163 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:56.728846 master-0 kubenswrapper[7604]: I0309 16:29:56.728182 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:57.727660 master-0 kubenswrapper[7604]: I0309 16:29:57.727594 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:57.727660 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:57.727660 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:57.727660 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:57.727660 master-0 kubenswrapper[7604]: I0309 16:29:57.727657 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:58.727911 master-0 kubenswrapper[7604]: I0309 16:29:58.727855 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:58.727911 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:58.727911 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:58.727911 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:58.728630 master-0 kubenswrapper[7604]: I0309 16:29:58.727932 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:29:59.728650 master-0 kubenswrapper[7604]: I0309 16:29:59.728579 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:29:59.728650 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:29:59.728650 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:29:59.728650 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:29:59.729333 master-0 kubenswrapper[7604]: I0309 16:29:59.728693 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:00.727447 master-0 kubenswrapper[7604]: I0309 16:30:00.727317 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:00.727447 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:00.727447 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:00.727447 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:00.727447 master-0 kubenswrapper[7604]: I0309 16:30:00.727403 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:01.727388 master-0 kubenswrapper[7604]: I0309 16:30:01.727315 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:01.727388 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:01.727388 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:01.727388 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:01.727997 master-0 kubenswrapper[7604]: I0309 16:30:01.727390 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:02.727364 master-0 kubenswrapper[7604]: I0309 16:30:02.727311 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:02.727364 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:02.727364 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:02.727364 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:02.727915 master-0 kubenswrapper[7604]: I0309 16:30:02.727370 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:03.729185 master-0 kubenswrapper[7604]: I0309 16:30:03.729114 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:03.729185 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:03.729185 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:03.729185 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:03.729762 master-0 kubenswrapper[7604]: I0309 16:30:03.729205 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:04.727505 master-0 kubenswrapper[7604]: I0309 16:30:04.727409 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:04.727505 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:04.727505 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:04.727505 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:04.727910 master-0 kubenswrapper[7604]: I0309 16:30:04.727494 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:05.728995 master-0 kubenswrapper[7604]: I0309 16:30:05.728913 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:05.728995 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:05.728995 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:05.728995 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:05.729667 master-0 kubenswrapper[7604]: I0309 16:30:05.729059 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:06.728754 master-0 kubenswrapper[7604]: I0309 16:30:06.728701 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:06.728754 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:06.728754 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:06.728754 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:06.729397 master-0 kubenswrapper[7604]: I0309 16:30:06.728767 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:07.727765 master-0 kubenswrapper[7604]: I0309 16:30:07.727695 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:07.727765 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:07.727765 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:07.727765 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:07.728098 master-0 kubenswrapper[7604]: I0309 16:30:07.727797 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:08.331030 master-0 kubenswrapper[7604]: I0309 16:30:08.330954 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:30:08.331030 master-0 kubenswrapper[7604]: I0309 16:30:08.331034 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:30:08.728076 master-0 kubenswrapper[7604]: I0309 16:30:08.728023 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:08.728076 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:08.728076 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:08.728076 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:08.728406 master-0 kubenswrapper[7604]: I0309 16:30:08.728096 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:09.730090 master-0 kubenswrapper[7604]: I0309 16:30:09.730030 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:09.730090 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:09.730090 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:09.730090 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:09.730716 master-0 kubenswrapper[7604]: I0309 16:30:09.730106 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:10.727104 master-0 kubenswrapper[7604]: I0309 16:30:10.727046 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:10.727104 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:10.727104 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:10.727104 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:10.727383 master-0 kubenswrapper[7604]: I0309 16:30:10.727118 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:11.728402 master-0 kubenswrapper[7604]: I0309 16:30:11.728288 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:11.728402 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:11.728402 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:11.728402 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:11.728402 master-0 kubenswrapper[7604]: I0309 16:30:11.728411 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:12.727603 master-0 kubenswrapper[7604]: I0309 16:30:12.727510 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:12.727603 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:12.727603 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:12.727603 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:12.727980 master-0 kubenswrapper[7604]: I0309 16:30:12.727605 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:13.727514 master-0 kubenswrapper[7604]: I0309 16:30:13.727455 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:13.727514 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:13.727514 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:13.727514 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:13.728088 master-0 kubenswrapper[7604]: I0309 16:30:13.727540 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:14.727126 master-0 kubenswrapper[7604]: I0309 16:30:14.727048 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:14.727126 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:14.727126 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:14.727126 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:14.727503 master-0 kubenswrapper[7604]: I0309 16:30:14.727141 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:15.728608 master-0 kubenswrapper[7604]: I0309 16:30:15.728543 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:15.728608 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:15.728608 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:15.728608 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:15.729173 master-0 kubenswrapper[7604]: I0309 16:30:15.728644 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:16.728572 master-0 kubenswrapper[7604]: I0309 16:30:16.728516 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:16.728572 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:16.728572 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:16.728572 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:16.729151 master-0 kubenswrapper[7604]: I0309 16:30:16.728593 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:17.728030 master-0 kubenswrapper[7604]: I0309 16:30:17.727926 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:17.728030 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:17.728030 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:17.728030 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:17.728539 master-0 kubenswrapper[7604]: I0309 16:30:17.728048 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:18.731007 master-0 kubenswrapper[7604]: I0309 16:30:18.730900 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:18.731007 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:18.731007 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:18.731007 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:18.731007 master-0 kubenswrapper[7604]: I0309 16:30:18.730988 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:19.728064 master-0 kubenswrapper[7604]: I0309 16:30:19.727999 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:19.728064 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:19.728064 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:19.728064 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:19.728064 master-0 kubenswrapper[7604]: I0309 16:30:19.728075 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:20.729266 master-0 kubenswrapper[7604]: I0309 16:30:20.728996 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:20.729266 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:20.729266 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:20.729266 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:20.730109 master-0 kubenswrapper[7604]: I0309 16:30:20.729122 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:21.729225 master-0 kubenswrapper[7604]: I0309 16:30:21.729113 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:21.729225 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:21.729225 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:21.729225 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:21.730753 master-0 kubenswrapper[7604]: I0309 16:30:21.730673 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:22.728292 master-0 kubenswrapper[7604]: I0309 16:30:22.728197 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:22.728292 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:22.728292 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:22.728292 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:22.728292 master-0 kubenswrapper[7604]: I0309 16:30:22.728271 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:23.729262 master-0 kubenswrapper[7604]: I0309 16:30:23.729175 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:23.729262 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:23.729262 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:23.729262 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:23.730165 master-0 kubenswrapper[7604]: I0309 16:30:23.729338 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:24.728726 master-0 kubenswrapper[7604]: I0309 16:30:24.728646 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:24.728726 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:24.728726 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:24.728726 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:24.729078 master-0 kubenswrapper[7604]: I0309 16:30:24.728743 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:25.729275 master-0 kubenswrapper[7604]: I0309 16:30:25.729149 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:25.729275 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:25.729275 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:25.729275 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:25.730082 master-0 kubenswrapper[7604]: I0309 16:30:25.729286 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:26.020995 master-0 kubenswrapper[7604]: I0309 16:30:26.020865 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/0.log"
Mar 09 16:30:26.020995 master-0 kubenswrapper[7604]: I0309 16:30:26.020961 7604 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="f908a12ac71e2212454263bad6748c946abbe3337853638f948a9c8e648cf7ad" exitCode=1
Mar 09 16:30:26.021214 master-0 kubenswrapper[7604]: I0309 16:30:26.021045 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerDied","Data":"f908a12ac71e2212454263bad6748c946abbe3337853638f948a9c8e648cf7ad"}
Mar 09 16:30:26.022229 master-0 kubenswrapper[7604]: I0309 16:30:26.022160 7604 scope.go:117] "RemoveContainer" containerID="f908a12ac71e2212454263bad6748c946abbe3337853638f948a9c8e648cf7ad"
Mar 09 16:30:26.729700 master-0 kubenswrapper[7604]: I0309 16:30:26.729612 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:26.729700 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:26.729700 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:26.729700 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:26.730647 master-0 kubenswrapper[7604]: I0309 16:30:26.729725 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:27.037591 master-0 kubenswrapper[7604]: I0309 16:30:27.037453 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/0.log"
Mar 09 16:30:27.037591 master-0 kubenswrapper[7604]: I0309 16:30:27.037538 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"f8200a1495a7e1c37d6537537ac72284b2e4af062cfb0a0dbced10da1379a3d0"}
Mar 09 16:30:27.728609 master-0 kubenswrapper[7604]: I0309 16:30:27.728507 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:27.728609 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:27.728609 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:27.728609 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:27.729054 master-0 kubenswrapper[7604]: I0309 16:30:27.728635 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:28.342326 master-0 kubenswrapper[7604]: I0309 16:30:28.342195 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:30:28.348765 master-0 kubenswrapper[7604]: I0309 16:30:28.348703 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:30:28.729830 master-0 kubenswrapper[7604]: I0309 16:30:28.729750 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:28.729830 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:28.729830 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:28.729830 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:28.730279 master-0 kubenswrapper[7604]: I0309 16:30:28.729864 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:29.727470 master-0 kubenswrapper[7604]: I0309 16:30:29.727372 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:29.727470 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:29.727470 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:29.727470 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:29.727470 master-0 kubenswrapper[7604]: I0309 16:30:29.727457 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:30.728765 master-0 kubenswrapper[7604]: I0309 16:30:30.728654 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:30:30.728765 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:30:30.728765 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:30:30.728765 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:30:30.728765 master-0 kubenswrapper[7604]: I0309 16:30:30.728762 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:30:31.727356 master-0 kubenswrapper[7604]: I0309 16:30:31.727273 7604
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:31.727356 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:31.727356 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:31.727356 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:31.727356 master-0 kubenswrapper[7604]: I0309 16:30:31.727345 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:32.731410 master-0 kubenswrapper[7604]: I0309 16:30:32.731331 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:32.731410 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:32.731410 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:32.731410 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:32.732090 master-0 kubenswrapper[7604]: I0309 16:30:32.731432 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:33.728932 master-0 kubenswrapper[7604]: I0309 16:30:33.728843 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:33.728932 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:33.728932 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:33.728932 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:33.729281 master-0 kubenswrapper[7604]: I0309 16:30:33.728942 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:34.727669 master-0 kubenswrapper[7604]: I0309 16:30:34.727572 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:34.727669 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:34.727669 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:34.727669 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:34.727669 master-0 kubenswrapper[7604]: I0309 16:30:34.727634 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:35.729352 master-0 kubenswrapper[7604]: I0309 16:30:35.728769 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:35.729352 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:35.729352 master-0 kubenswrapper[7604]: [+]process-running ok 
Mar 09 16:30:35.729352 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:35.730035 master-0 kubenswrapper[7604]: I0309 16:30:35.729399 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:36.728888 master-0 kubenswrapper[7604]: I0309 16:30:36.728644 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:36.728888 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:36.728888 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:36.728888 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:36.728888 master-0 kubenswrapper[7604]: I0309 16:30:36.728768 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:37.727693 master-0 kubenswrapper[7604]: I0309 16:30:37.727598 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:37.727693 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:37.727693 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:37.727693 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:37.727693 master-0 kubenswrapper[7604]: I0309 16:30:37.727681 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:38.727153 master-0 kubenswrapper[7604]: I0309 16:30:38.727073 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:38.727153 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:38.727153 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:38.727153 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:38.727788 master-0 kubenswrapper[7604]: I0309 16:30:38.727161 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:39.730161 master-0 kubenswrapper[7604]: I0309 16:30:39.729272 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:39.730161 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:39.730161 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:39.730161 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:39.730883 master-0 kubenswrapper[7604]: I0309 16:30:39.730289 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:40.727795 
master-0 kubenswrapper[7604]: I0309 16:30:40.727724 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:40.727795 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:40.727795 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:40.727795 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:40.728078 master-0 kubenswrapper[7604]: I0309 16:30:40.727792 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:41.727862 master-0 kubenswrapper[7604]: I0309 16:30:41.727754 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:41.727862 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:41.727862 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:41.727862 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:41.728654 master-0 kubenswrapper[7604]: I0309 16:30:41.727872 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:42.728756 master-0 kubenswrapper[7604]: I0309 16:30:42.728667 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:42.728756 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:42.728756 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:42.728756 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:42.728756 master-0 kubenswrapper[7604]: I0309 16:30:42.728754 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:43.219050 master-0 kubenswrapper[7604]: I0309 16:30:43.219013 7604 scope.go:117] "RemoveContainer" containerID="5f6392f9e974864cb8a576a8cc4e692a56b1538084351cbc64c608b35b4670f8" Mar 09 16:30:43.726871 master-0 kubenswrapper[7604]: I0309 16:30:43.726791 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:43.726871 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:43.726871 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:43.726871 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:43.726871 master-0 kubenswrapper[7604]: I0309 16:30:43.726867 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:44.727866 master-0 kubenswrapper[7604]: I0309 16:30:44.727748 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:44.727866 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:44.727866 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:44.727866 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:44.727866 master-0 kubenswrapper[7604]: I0309 16:30:44.727852 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:45.728214 master-0 kubenswrapper[7604]: I0309 16:30:45.728124 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:45.728214 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:45.728214 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:45.728214 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:45.728214 master-0 kubenswrapper[7604]: I0309 16:30:45.728198 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:46.729148 master-0 kubenswrapper[7604]: I0309 16:30:46.728966 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:46.729148 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:46.729148 master-0 kubenswrapper[7604]: [+]process-running ok 
Mar 09 16:30:46.729148 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:46.729148 master-0 kubenswrapper[7604]: I0309 16:30:46.729060 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:47.728704 master-0 kubenswrapper[7604]: I0309 16:30:47.728631 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:47.728704 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:47.728704 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:47.728704 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:47.729331 master-0 kubenswrapper[7604]: I0309 16:30:47.729292 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:48.729034 master-0 kubenswrapper[7604]: I0309 16:30:48.728936 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:48.729034 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:48.729034 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:48.729034 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:48.729356 master-0 kubenswrapper[7604]: I0309 16:30:48.729071 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:49.727483 master-0 kubenswrapper[7604]: I0309 16:30:49.727381 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:49.727483 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:49.727483 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:49.727483 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:49.727937 master-0 kubenswrapper[7604]: I0309 16:30:49.727504 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:50.729566 master-0 kubenswrapper[7604]: I0309 16:30:50.729482 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:50.729566 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:50.729566 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:50.729566 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:50.730549 master-0 kubenswrapper[7604]: I0309 16:30:50.729582 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:51.728416 
master-0 kubenswrapper[7604]: I0309 16:30:51.728343 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:51.728416 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:51.728416 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:51.728416 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:51.728416 master-0 kubenswrapper[7604]: I0309 16:30:51.728450 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:52.727265 master-0 kubenswrapper[7604]: I0309 16:30:52.727210 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:52.727265 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:52.727265 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:52.727265 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:52.728048 master-0 kubenswrapper[7604]: I0309 16:30:52.727281 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:53.729130 master-0 kubenswrapper[7604]: I0309 16:30:53.729063 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:53.729130 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:53.729130 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:53.729130 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:53.730184 master-0 kubenswrapper[7604]: I0309 16:30:53.730139 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:54.728878 master-0 kubenswrapper[7604]: I0309 16:30:54.728801 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:54.728878 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:54.728878 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:54.728878 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:54.729668 master-0 kubenswrapper[7604]: I0309 16:30:54.728911 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:55.729361 master-0 kubenswrapper[7604]: I0309 16:30:55.729255 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:55.729361 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:55.729361 master-0 
kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:55.729361 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:55.729361 master-0 kubenswrapper[7604]: I0309 16:30:55.729360 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:56.728882 master-0 kubenswrapper[7604]: I0309 16:30:56.728791 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:56.728882 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:56.728882 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:56.728882 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:56.729389 master-0 kubenswrapper[7604]: I0309 16:30:56.728911 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:57.730214 master-0 kubenswrapper[7604]: I0309 16:30:57.730116 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:57.730214 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:57.730214 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:57.730214 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:57.731105 master-0 kubenswrapper[7604]: I0309 16:30:57.730239 7604 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:58.729252 master-0 kubenswrapper[7604]: I0309 16:30:58.729146 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:58.729252 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:58.729252 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:58.729252 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:58.729717 master-0 kubenswrapper[7604]: I0309 16:30:58.729274 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:30:59.728385 master-0 kubenswrapper[7604]: I0309 16:30:59.728303 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:30:59.728385 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:30:59.728385 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:30:59.728385 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:30:59.728963 master-0 kubenswrapper[7604]: I0309 16:30:59.728410 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 09 16:31:00.728111 master-0 kubenswrapper[7604]: I0309 16:31:00.728005 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:00.728111 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:00.728111 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:00.728111 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:00.728111 master-0 kubenswrapper[7604]: I0309 16:31:00.728103 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:01.728968 master-0 kubenswrapper[7604]: I0309 16:31:01.728847 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:01.728968 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:01.728968 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:01.728968 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:01.729861 master-0 kubenswrapper[7604]: I0309 16:31:01.728986 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:02.728438 master-0 kubenswrapper[7604]: I0309 16:31:02.728316 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:02.728438 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:02.728438 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:02.728438 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:02.728913 master-0 kubenswrapper[7604]: I0309 16:31:02.728445 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:03.729106 master-0 kubenswrapper[7604]: I0309 16:31:03.729019 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:03.729106 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:03.729106 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:03.729106 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:03.730027 master-0 kubenswrapper[7604]: I0309 16:31:03.729122 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:04.728040 master-0 kubenswrapper[7604]: I0309 16:31:04.727943 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:04.728040 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:31:04.728040 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:04.728040 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:04.728040 master-0 kubenswrapper[7604]: I0309 16:31:04.728037 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:05.728968 master-0 kubenswrapper[7604]: I0309 16:31:05.728829 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:05.728968 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:05.728968 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:05.728968 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:05.728968 master-0 kubenswrapper[7604]: I0309 16:31:05.728913 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:06.728951 master-0 kubenswrapper[7604]: I0309 16:31:06.728864 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:06.728951 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:06.728951 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:06.728951 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:06.729707 master-0 kubenswrapper[7604]: I0309 16:31:06.728957 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:07.728346 master-0 kubenswrapper[7604]: I0309 16:31:07.728254 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:07.728346 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:07.728346 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:07.728346 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:07.728346 master-0 kubenswrapper[7604]: I0309 16:31:07.728342 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:08.729246 master-0 kubenswrapper[7604]: I0309 16:31:08.729129 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:08.729246 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:08.729246 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:08.729246 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:08.729246 master-0 kubenswrapper[7604]: I0309 16:31:08.729247 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 09 16:31:08.803374 master-0 kubenswrapper[7604]: I0309 16:31:08.803266 7604 patch_prober.go:28] interesting pod/machine-config-daemon-94s4v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 16:31:08.803782 master-0 kubenswrapper[7604]: I0309 16:31:08.803377 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94s4v" podUID="baf704e3-daf2-4934-a04e-d31df8df0c4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 16:31:09.731512 master-0 kubenswrapper[7604]: I0309 16:31:09.731364 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:09.731512 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:09.731512 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:09.731512 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:09.731512 master-0 kubenswrapper[7604]: I0309 16:31:09.731499 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:10.727652 master-0 kubenswrapper[7604]: I0309 16:31:10.727553 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 09 16:31:10.727652 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:10.727652 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:10.727652 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:10.728124 master-0 kubenswrapper[7604]: I0309 16:31:10.727709 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:11.729315 master-0 kubenswrapper[7604]: I0309 16:31:11.729233 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:11.729315 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:11.729315 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:11.729315 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:11.729315 master-0 kubenswrapper[7604]: I0309 16:31:11.729321 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:12.728472 master-0 kubenswrapper[7604]: I0309 16:31:12.728367 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:12.728472 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:12.728472 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:12.728472 
master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:12.728952 master-0 kubenswrapper[7604]: I0309 16:31:12.728511 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:13.728682 master-0 kubenswrapper[7604]: I0309 16:31:13.728566 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:13.728682 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:13.728682 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:13.728682 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:13.728682 master-0 kubenswrapper[7604]: I0309 16:31:13.728671 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:14.728310 master-0 kubenswrapper[7604]: I0309 16:31:14.728222 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:14.728310 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:14.728310 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:14.728310 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:14.729368 master-0 kubenswrapper[7604]: I0309 16:31:14.728324 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:15.729775 master-0 kubenswrapper[7604]: I0309 16:31:15.729702 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:15.729775 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:15.729775 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:15.729775 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:15.730531 master-0 kubenswrapper[7604]: I0309 16:31:15.729797 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:16.728088 master-0 kubenswrapper[7604]: I0309 16:31:16.728019 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:16.728088 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:16.728088 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:16.728088 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:16.728494 master-0 kubenswrapper[7604]: I0309 16:31:16.728169 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:17.728108 
master-0 kubenswrapper[7604]: I0309 16:31:17.728029 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:17.728108 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:17.728108 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:17.728108 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:17.728872 master-0 kubenswrapper[7604]: I0309 16:31:17.728146 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:18.728299 master-0 kubenswrapper[7604]: I0309 16:31:18.728250 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:18.728299 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:18.728299 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:18.728299 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:18.728949 master-0 kubenswrapper[7604]: I0309 16:31:18.728305 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:19.728169 master-0 kubenswrapper[7604]: I0309 16:31:19.728074 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:19.728169 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:19.728169 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:19.728169 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:19.729037 master-0 kubenswrapper[7604]: I0309 16:31:19.728191 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:20.729755 master-0 kubenswrapper[7604]: I0309 16:31:20.729651 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:20.729755 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:20.729755 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:20.729755 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:20.730762 master-0 kubenswrapper[7604]: I0309 16:31:20.730713 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:21.729695 master-0 kubenswrapper[7604]: I0309 16:31:21.729558 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:21.729695 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:21.729695 master-0 
kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:21.729695 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:21.730672 master-0 kubenswrapper[7604]: I0309 16:31:21.729752 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:22.728748 master-0 kubenswrapper[7604]: I0309 16:31:22.728646 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:22.728748 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:22.728748 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:22.728748 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:22.729145 master-0 kubenswrapper[7604]: I0309 16:31:22.728783 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:23.728170 master-0 kubenswrapper[7604]: I0309 16:31:23.728089 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:23.728170 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:23.728170 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:23.728170 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:23.729001 master-0 kubenswrapper[7604]: I0309 16:31:23.728180 7604 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:24.727833 master-0 kubenswrapper[7604]: I0309 16:31:24.727756 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:24.727833 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:24.727833 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:24.727833 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:24.728865 master-0 kubenswrapper[7604]: I0309 16:31:24.727851 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:25.727531 master-0 kubenswrapper[7604]: I0309 16:31:25.727416 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:25.727531 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:25.727531 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:25.727531 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:25.727937 master-0 kubenswrapper[7604]: I0309 16:31:25.727546 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 09 16:31:26.729061 master-0 kubenswrapper[7604]: I0309 16:31:26.728970 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:26.729061 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:26.729061 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:26.729061 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:26.729951 master-0 kubenswrapper[7604]: I0309 16:31:26.729088 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:27.729795 master-0 kubenswrapper[7604]: I0309 16:31:27.729717 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:31:27.729795 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:31:27.729795 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:31:27.729795 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:31:27.730707 master-0 kubenswrapper[7604]: I0309 16:31:27.730567 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:31:27.730707 master-0 kubenswrapper[7604]: I0309 16:31:27.730677 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 
16:31:27.731714 master-0 kubenswrapper[7604]: I0309 16:31:27.731668 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"93efe9411f2e38cd517ba36a435f06b5ae09ea631b8beedeb3e3a210ec78c7fe"} pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" containerMessage="Container router failed startup probe, will be restarted" Mar 09 16:31:27.731786 master-0 kubenswrapper[7604]: I0309 16:31:27.731727 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" containerID="cri-o://93efe9411f2e38cd517ba36a435f06b5ae09ea631b8beedeb3e3a210ec78c7fe" gracePeriod=3600 Mar 09 16:31:38.803077 master-0 kubenswrapper[7604]: I0309 16:31:38.802978 7604 patch_prober.go:28] interesting pod/machine-config-daemon-94s4v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 09 16:31:38.804037 master-0 kubenswrapper[7604]: I0309 16:31:38.803102 7604 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-94s4v" podUID="baf704e3-daf2-4934-a04e-d31df8df0c4a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 09 16:32:14.828852 master-0 kubenswrapper[7604]: I0309 16:32:14.828742 7604 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="93efe9411f2e38cd517ba36a435f06b5ae09ea631b8beedeb3e3a210ec78c7fe" exitCode=0 Mar 09 16:32:14.828852 master-0 kubenswrapper[7604]: I0309 16:32:14.828824 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" 
event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerDied","Data":"93efe9411f2e38cd517ba36a435f06b5ae09ea631b8beedeb3e3a210ec78c7fe"} Mar 09 16:32:14.829556 master-0 kubenswrapper[7604]: I0309 16:32:14.828937 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"116e02ef02114f2030248577cde62b42e1c5eea50c09ca56d92d93834a526424"} Mar 09 16:32:15.726593 master-0 kubenswrapper[7604]: I0309 16:32:15.726398 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:32:15.729311 master-0 kubenswrapper[7604]: I0309 16:32:15.729254 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:15.729311 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:15.729311 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:15.729311 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:15.729502 master-0 kubenswrapper[7604]: I0309 16:32:15.729322 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:16.729335 master-0 kubenswrapper[7604]: I0309 16:32:16.729216 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:16.729335 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld 
Mar 09 16:32:16.729335 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:16.729335 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:16.729335 master-0 kubenswrapper[7604]: I0309 16:32:16.729320 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:17.728933 master-0 kubenswrapper[7604]: I0309 16:32:17.728854 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:17.728933 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:17.728933 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:17.728933 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:17.729336 master-0 kubenswrapper[7604]: I0309 16:32:17.728969 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:18.725929 master-0 kubenswrapper[7604]: I0309 16:32:18.725830 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:32:18.728958 master-0 kubenswrapper[7604]: I0309 16:32:18.728897 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:18.728958 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:32:18.728958 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:18.728958 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:18.729156 master-0 kubenswrapper[7604]: I0309 16:32:18.728974 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:19.727927 master-0 kubenswrapper[7604]: I0309 16:32:19.727852 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:19.727927 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:19.727927 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:19.727927 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:19.728599 master-0 kubenswrapper[7604]: I0309 16:32:19.727943 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:20.730017 master-0 kubenswrapper[7604]: I0309 16:32:20.729911 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:20.730017 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:20.730017 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:20.730017 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:20.731054 master-0 kubenswrapper[7604]: I0309 16:32:20.730686 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:21.729248 master-0 kubenswrapper[7604]: I0309 16:32:21.729141 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:21.729248 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:21.729248 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:21.729248 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:21.729938 master-0 kubenswrapper[7604]: I0309 16:32:21.729269 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:22.728214 master-0 kubenswrapper[7604]: I0309 16:32:22.728089 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:22.728214 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:22.728214 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:22.728214 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:22.729112 master-0 kubenswrapper[7604]: I0309 16:32:22.728248 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:23.728898 master-0 kubenswrapper[7604]: I0309 16:32:23.728831 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:23.728898 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:23.728898 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:23.728898 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:23.729730 master-0 kubenswrapper[7604]: I0309 16:32:23.728939 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:24.729407 master-0 kubenswrapper[7604]: I0309 16:32:24.729312 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:24.729407 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:24.729407 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:24.729407 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:24.729407 master-0 kubenswrapper[7604]: I0309 16:32:24.729403 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:25.728272 master-0 kubenswrapper[7604]: I0309 16:32:25.728185 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:25.728272 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:25.728272 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:25.728272 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:25.728822 master-0 kubenswrapper[7604]: I0309 16:32:25.728282 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:26.728654 master-0 kubenswrapper[7604]: I0309 16:32:26.728569 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:26.728654 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:26.728654 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:26.728654 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:26.729556 master-0 kubenswrapper[7604]: I0309 16:32:26.728705 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:27.728713 master-0 kubenswrapper[7604]: I0309 16:32:27.728644 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:27.728713 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:27.728713 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:27.728713 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:27.729607 master-0 kubenswrapper[7604]: I0309 16:32:27.728750 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:28.730034 master-0 kubenswrapper[7604]: I0309 16:32:28.729926 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:28.730034 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:28.730034 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:28.730034 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:28.730890 master-0 kubenswrapper[7604]: I0309 16:32:28.730058 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:29.543573 master-0 kubenswrapper[7604]: I0309 16:32:29.543468 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nxtms"]
Mar 09 16:32:29.544660 master-0 kubenswrapper[7604]: I0309 16:32:29.544626 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:29.547721 master-0 kubenswrapper[7604]: I0309 16:32:29.546929 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-96tct"
Mar 09 16:32:29.547721 master-0 kubenswrapper[7604]: I0309 16:32:29.547154 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 09 16:32:29.547721 master-0 kubenswrapper[7604]: I0309 16:32:29.547350 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 09 16:32:29.553977 master-0 kubenswrapper[7604]: I0309 16:32:29.553276 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 09 16:32:29.560291 master-0 kubenswrapper[7604]: I0309 16:32:29.558102 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nxtms"]
Mar 09 16:32:29.622052 master-0 kubenswrapper[7604]: I0309 16:32:29.621966 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2bk\" (UniqueName: \"kubernetes.io/projected/18f0164f-0875-4668-b155-df69e05e8ae0-kube-api-access-pq2bk\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:29.622536 master-0 kubenswrapper[7604]: I0309 16:32:29.622149 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:29.723332 master-0 kubenswrapper[7604]: I0309 16:32:29.723256 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:29.723684 master-0 kubenswrapper[7604]: I0309 16:32:29.723352 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2bk\" (UniqueName: \"kubernetes.io/projected/18f0164f-0875-4668-b155-df69e05e8ae0-kube-api-access-pq2bk\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:29.723851 master-0 kubenswrapper[7604]: E0309 16:32:29.723793 7604 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 09 16:32:29.723911 master-0 kubenswrapper[7604]: E0309 16:32:29.723896 7604 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert podName:18f0164f-0875-4668-b155-df69e05e8ae0 nodeName:}" failed. No retries permitted until 2026-03-09 16:32:30.223873603 +0000 UTC m=+407.277843026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert") pod "ingress-canary-nxtms" (UID: "18f0164f-0875-4668-b155-df69e05e8ae0") : secret "canary-serving-cert" not found
Mar 09 16:32:29.728251 master-0 kubenswrapper[7604]: I0309 16:32:29.728187 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:29.728251 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:29.728251 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:29.728251 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:29.728589 master-0 kubenswrapper[7604]: I0309 16:32:29.728284 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:29.752130 master-0 kubenswrapper[7604]: I0309 16:32:29.752003 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2bk\" (UniqueName: \"kubernetes.io/projected/18f0164f-0875-4668-b155-df69e05e8ae0-kube-api-access-pq2bk\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:29.941063 master-0 kubenswrapper[7604]: I0309 16:32:29.940826 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/1.log"
Mar 09 16:32:29.944685 master-0 kubenswrapper[7604]: I0309 16:32:29.944619 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/0.log"
Mar 09 16:32:29.945050 master-0 kubenswrapper[7604]: I0309 16:32:29.944698 7604 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="f8200a1495a7e1c37d6537537ac72284b2e4af062cfb0a0dbced10da1379a3d0" exitCode=1
Mar 09 16:32:29.945050 master-0 kubenswrapper[7604]: I0309 16:32:29.944763 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerDied","Data":"f8200a1495a7e1c37d6537537ac72284b2e4af062cfb0a0dbced10da1379a3d0"}
Mar 09 16:32:29.945050 master-0 kubenswrapper[7604]: I0309 16:32:29.944832 7604 scope.go:117] "RemoveContainer" containerID="f908a12ac71e2212454263bad6748c946abbe3337853638f948a9c8e648cf7ad"
Mar 09 16:32:29.945728 master-0 kubenswrapper[7604]: I0309 16:32:29.945653 7604 scope.go:117] "RemoveContainer" containerID="f8200a1495a7e1c37d6537537ac72284b2e4af062cfb0a0dbced10da1379a3d0"
Mar 09 16:32:29.946066 master-0 kubenswrapper[7604]: E0309 16:32:29.946029 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd"
Mar 09 16:32:30.233755 master-0 kubenswrapper[7604]: I0309 16:32:30.233552 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:30.239015 master-0 kubenswrapper[7604]: I0309 16:32:30.238946 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:30.499528 master-0 kubenswrapper[7604]: I0309 16:32:30.499329 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:32:30.728760 master-0 kubenswrapper[7604]: I0309 16:32:30.728673 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:30.728760 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:30.728760 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:30.728760 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:30.729167 master-0 kubenswrapper[7604]: I0309 16:32:30.728782 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:30.931887 master-0 kubenswrapper[7604]: I0309 16:32:30.931807 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nxtms"]
Mar 09 16:32:30.935335 master-0 kubenswrapper[7604]: W0309 16:32:30.935273 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f0164f_0875_4668_b155_df69e05e8ae0.slice/crio-fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808 WatchSource:0}: Error finding container fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808: Status 404 returned error can't find the container with id fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808
Mar 09 16:32:30.953915 master-0 kubenswrapper[7604]: I0309 16:32:30.953255 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nxtms" event={"ID":"18f0164f-0875-4668-b155-df69e05e8ae0","Type":"ContainerStarted","Data":"fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808"}
Mar 09 16:32:30.956596 master-0 kubenswrapper[7604]: I0309 16:32:30.956538 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/1.log"
Mar 09 16:32:31.731466 master-0 kubenswrapper[7604]: I0309 16:32:31.728847 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:31.731466 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:31.731466 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:31.731466 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:31.731466 master-0 kubenswrapper[7604]: I0309 16:32:31.728938 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:31.977097 master-0 kubenswrapper[7604]: I0309 16:32:31.977015 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nxtms" event={"ID":"18f0164f-0875-4668-b155-df69e05e8ae0","Type":"ContainerStarted","Data":"a5eb24f9a30a3ab1a261eb519c236de30c3ea61bcd3422f3827278ae548ba176"}
Mar 09 16:32:31.995065 master-0 kubenswrapper[7604]: I0309 16:32:31.994881 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nxtms" podStartSLOduration=2.9948569579999997 podStartE2EDuration="2.994856958s" podCreationTimestamp="2026-03-09 16:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:32:31.99281224 +0000 UTC m=+409.046781683" watchObservedRunningTime="2026-03-09 16:32:31.994856958 +0000 UTC m=+409.048826381"
Mar 09 16:32:32.728240 master-0 kubenswrapper[7604]: I0309 16:32:32.728127 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:32.728240 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:32.728240 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:32.728240 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:32.728616 master-0 kubenswrapper[7604]: I0309 16:32:32.728267 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:33.728306 master-0 kubenswrapper[7604]: I0309 16:32:33.728225 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:33.728306 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:33.728306 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:33.728306 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:33.729043 master-0 kubenswrapper[7604]: I0309 16:32:33.728330 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:34.728655 master-0 kubenswrapper[7604]: I0309 16:32:34.728577 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:34.728655 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:34.728655 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:34.728655 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:34.729539 master-0 kubenswrapper[7604]: I0309 16:32:34.728671 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:35.729042 master-0 kubenswrapper[7604]: I0309 16:32:35.728968 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:35.729042 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:35.729042 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:35.729042 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:35.730095 master-0 kubenswrapper[7604]: I0309 16:32:35.729073 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:36.728598 master-0 kubenswrapper[7604]: I0309 16:32:36.728520 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:36.728598 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:36.728598 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:36.728598 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:36.728598 master-0 kubenswrapper[7604]: I0309 16:32:36.728607 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:37.728485 master-0 kubenswrapper[7604]: I0309 16:32:37.728412 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:37.728485 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:37.728485 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:37.728485 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:37.729133 master-0 kubenswrapper[7604]: I0309 16:32:37.728501 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:38.728359 master-0 kubenswrapper[7604]: I0309 16:32:38.728280 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:38.728359 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:38.728359 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:38.728359 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:38.729395 master-0 kubenswrapper[7604]: I0309 16:32:38.728368 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:39.729160 master-0 kubenswrapper[7604]: I0309 16:32:39.729040 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:39.729160 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:39.729160 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:39.729160 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:39.729786 master-0 kubenswrapper[7604]: I0309 16:32:39.729174 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:40.728075 master-0 kubenswrapper[7604]: I0309 16:32:40.727986 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:40.728075 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:40.728075 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:40.728075 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:40.728075 master-0 kubenswrapper[7604]: I0309 16:32:40.728082 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:41.729547 master-0 kubenswrapper[7604]: I0309 16:32:41.729448 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:41.729547 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:41.729547 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:41.729547 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:41.730595 master-0 kubenswrapper[7604]: I0309 16:32:41.729566 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:42.730515 master-0 kubenswrapper[7604]: I0309 16:32:42.730444 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:42.730515 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:42.730515 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:42.730515 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:42.731099 master-0 kubenswrapper[7604]: I0309 16:32:42.730528 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:43.728139 master-0 kubenswrapper[7604]: I0309 16:32:43.728064 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:43.728139 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:43.728139 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:43.728139 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:43.728486 master-0 kubenswrapper[7604]: I0309 16:32:43.728150 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:44.111243 master-0 kubenswrapper[7604]: I0309 16:32:44.111111 7604 scope.go:117] "RemoveContainer" containerID="f8200a1495a7e1c37d6537537ac72284b2e4af062cfb0a0dbced10da1379a3d0"
Mar 09 16:32:44.727339 master-0 kubenswrapper[7604]: I0309 16:32:44.727247 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:44.727339 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:44.727339 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:44.727339 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:44.727339 master-0 kubenswrapper[7604]: I0309 16:32:44.727316 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:45.074005 master-0 kubenswrapper[7604]: I0309 16:32:45.073830 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/1.log"
Mar 09 16:32:45.074391 master-0 kubenswrapper[7604]: I0309 16:32:45.074342 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44"}
Mar 09 16:32:45.728698 master-0 kubenswrapper[7604]: I0309 16:32:45.728619 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:45.728698 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:45.728698 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:45.728698 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:45.728698 master-0 kubenswrapper[7604]: I0309 16:32:45.728693 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:46.728303 master-0 kubenswrapper[7604]: I0309 16:32:46.728236 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:46.728303 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:46.728303 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:46.728303 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:46.728831 master-0 kubenswrapper[7604]: I0309 16:32:46.728788 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:47.728080 master-0 kubenswrapper[7604]: I0309 16:32:47.727995 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:47.728080 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:47.728080 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:47.728080 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:47.728537 master-0 kubenswrapper[7604]: I0309 16:32:47.728087 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:48.729309 master-0 kubenswrapper[7604]: I0309 16:32:48.729211 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:48.729309 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:48.729309 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:48.729309 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:48.729947 master-0 kubenswrapper[7604]: I0309 16:32:48.729335 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:49.727776 master-0 kubenswrapper[7604]: I0309 16:32:49.727687 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:49.727776 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:49.727776 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:49.727776 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:49.727776 master-0 kubenswrapper[7604]: I0309 16:32:49.727769 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:50.729203 master-0 kubenswrapper[7604]: I0309 16:32:50.729116 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:50.729203 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:50.729203 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:50.729203 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:50.729203 master-0 kubenswrapper[7604]: I0309 16:32:50.729201 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:51.729010 master-0 kubenswrapper[7604]: I0309 16:32:51.728927 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:51.729010 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:51.729010 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:51.729010 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:32:51.729668 master-0 kubenswrapper[7604]: I0309 16:32:51.729018 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:32:52.728503 master-0 kubenswrapper[7604]: I0309 16:32:52.728415 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:32:52.728503 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:32:52.728503 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:32:52.728503 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:52.728839 master-0 kubenswrapper[7604]: I0309 16:32:52.728516 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:53.728770 master-0 kubenswrapper[7604]: I0309 16:32:53.728681 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:53.728770 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:53.728770 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:53.728770 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:53.729657 master-0 kubenswrapper[7604]: I0309 16:32:53.728787 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:54.729007 master-0 kubenswrapper[7604]: I0309 16:32:54.728911 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:54.729007 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:54.729007 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:54.729007 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:54.729929 master-0 kubenswrapper[7604]: I0309 16:32:54.729026 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:55.728826 master-0 kubenswrapper[7604]: I0309 16:32:55.728738 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:55.728826 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:55.728826 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:55.728826 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:55.729676 master-0 kubenswrapper[7604]: I0309 16:32:55.728839 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:56.729150 master-0 kubenswrapper[7604]: I0309 16:32:56.729071 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:56.729150 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:56.729150 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:56.729150 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:56.729881 master-0 kubenswrapper[7604]: I0309 16:32:56.729182 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:57.728383 
master-0 kubenswrapper[7604]: I0309 16:32:57.728304 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:57.728383 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:57.728383 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:57.728383 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:57.728817 master-0 kubenswrapper[7604]: I0309 16:32:57.728385 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:58.727723 master-0 kubenswrapper[7604]: I0309 16:32:58.727649 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:58.727723 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:58.727723 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:58.727723 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:58.728678 master-0 kubenswrapper[7604]: I0309 16:32:58.728588 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:32:59.728589 master-0 kubenswrapper[7604]: I0309 16:32:59.728488 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:32:59.728589 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:32:59.728589 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:32:59.728589 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:32:59.729455 master-0 kubenswrapper[7604]: I0309 16:32:59.728623 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:00.728933 master-0 kubenswrapper[7604]: I0309 16:33:00.728831 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:00.728933 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:00.728933 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:00.728933 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:00.729732 master-0 kubenswrapper[7604]: I0309 16:33:00.728934 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:01.728527 master-0 kubenswrapper[7604]: I0309 16:33:01.728439 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:01.728527 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:01.728527 master-0 
kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:01.728527 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:01.728527 master-0 kubenswrapper[7604]: I0309 16:33:01.728521 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:02.728461 master-0 kubenswrapper[7604]: I0309 16:33:02.728360 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:02.728461 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:02.728461 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:02.728461 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:02.728902 master-0 kubenswrapper[7604]: I0309 16:33:02.728473 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:03.729486 master-0 kubenswrapper[7604]: I0309 16:33:03.729282 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:03.729486 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:03.729486 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:03.729486 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:03.729486 master-0 kubenswrapper[7604]: I0309 16:33:03.729459 7604 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:04.728985 master-0 kubenswrapper[7604]: I0309 16:33:04.728914 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:04.728985 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:04.728985 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:04.728985 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:04.729522 master-0 kubenswrapper[7604]: I0309 16:33:04.728993 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:05.729211 master-0 kubenswrapper[7604]: I0309 16:33:05.729125 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:05.729211 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:05.729211 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:05.729211 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:05.730036 master-0 kubenswrapper[7604]: I0309 16:33:05.729224 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 09 16:33:06.728928 master-0 kubenswrapper[7604]: I0309 16:33:06.728832 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:06.728928 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:06.728928 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:06.728928 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:06.729345 master-0 kubenswrapper[7604]: I0309 16:33:06.728934 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:07.729479 master-0 kubenswrapper[7604]: I0309 16:33:07.728885 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:07.729479 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:07.729479 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:07.729479 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:07.729479 master-0 kubenswrapper[7604]: I0309 16:33:07.729027 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:08.728270 master-0 kubenswrapper[7604]: I0309 16:33:08.728182 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:08.728270 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:08.728270 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:08.728270 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:08.728731 master-0 kubenswrapper[7604]: I0309 16:33:08.728280 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:09.728200 master-0 kubenswrapper[7604]: I0309 16:33:09.728068 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:09.728200 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:09.728200 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:09.728200 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:09.728200 master-0 kubenswrapper[7604]: I0309 16:33:09.728191 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:10.728301 master-0 kubenswrapper[7604]: I0309 16:33:10.728166 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:10.728301 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:33:10.728301 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:10.728301 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:10.728301 master-0 kubenswrapper[7604]: I0309 16:33:10.728283 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:11.729022 master-0 kubenswrapper[7604]: I0309 16:33:11.728931 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:11.729022 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:11.729022 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:11.729022 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:11.729964 master-0 kubenswrapper[7604]: I0309 16:33:11.729048 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:12.729599 master-0 kubenswrapper[7604]: I0309 16:33:12.729525 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:12.729599 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:12.729599 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:12.729599 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:12.730318 master-0 kubenswrapper[7604]: I0309 16:33:12.730289 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:13.728954 master-0 kubenswrapper[7604]: I0309 16:33:13.728868 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:13.728954 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:13.728954 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:13.728954 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:13.729418 master-0 kubenswrapper[7604]: I0309 16:33:13.728969 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:14.729367 master-0 kubenswrapper[7604]: I0309 16:33:14.729284 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:14.729367 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:14.729367 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:14.729367 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:14.730362 master-0 kubenswrapper[7604]: I0309 16:33:14.729397 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 09 16:33:15.570034 master-0 kubenswrapper[7604]: I0309 16:33:15.569952 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"] Mar 09 16:33:15.570394 master-0 kubenswrapper[7604]: I0309 16:33:15.570267 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" podUID="7b7d1963-c3f0-42bc-8720-426927a37a47" containerName="controller-manager" containerID="cri-o://15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d" gracePeriod=30 Mar 09 16:33:15.643803 master-0 kubenswrapper[7604]: I0309 16:33:15.643740 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw"] Mar 09 16:33:15.644098 master-0 kubenswrapper[7604]: I0309 16:33:15.644051 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" podUID="067290d0-06ec-4bb5-8618-b7b52a8b6bb1" containerName="route-controller-manager" containerID="cri-o://5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939" gracePeriod=30 Mar 09 16:33:15.729800 master-0 kubenswrapper[7604]: I0309 16:33:15.729713 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:15.729800 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:15.729800 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:15.729800 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:15.730395 master-0 kubenswrapper[7604]: I0309 16:33:15.729845 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:16.012584 master-0 kubenswrapper[7604]: I0309 16:33:16.012489 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" Mar 09 16:33:16.087062 master-0 kubenswrapper[7604]: I0309 16:33:16.087016 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:33:16.159219 master-0 kubenswrapper[7604]: I0309 16:33:16.159034 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85vj\" (UniqueName: \"kubernetes.io/projected/7b7d1963-c3f0-42bc-8720-426927a37a47-kube-api-access-t85vj\") pod \"7b7d1963-c3f0-42bc-8720-426927a37a47\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " Mar 09 16:33:16.159219 master-0 kubenswrapper[7604]: I0309 16:33:16.159134 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-config\") pod \"7b7d1963-c3f0-42bc-8720-426927a37a47\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " Mar 09 16:33:16.159725 master-0 kubenswrapper[7604]: I0309 16:33:16.159376 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7d1963-c3f0-42bc-8720-426927a37a47-serving-cert\") pod \"7b7d1963-c3f0-42bc-8720-426927a37a47\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " Mar 09 16:33:16.159725 master-0 kubenswrapper[7604]: I0309 16:33:16.159450 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-proxy-ca-bundles\") pod \"7b7d1963-c3f0-42bc-8720-426927a37a47\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " Mar 09 16:33:16.159725 master-0 kubenswrapper[7604]: I0309 16:33:16.159496 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-client-ca\") pod \"7b7d1963-c3f0-42bc-8720-426927a37a47\" (UID: \"7b7d1963-c3f0-42bc-8720-426927a37a47\") " Mar 09 16:33:16.161516 master-0 kubenswrapper[7604]: I0309 16:33:16.161339 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b7d1963-c3f0-42bc-8720-426927a37a47" (UID: "7b7d1963-c3f0-42bc-8720-426927a37a47"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:33:16.161858 master-0 kubenswrapper[7604]: I0309 16:33:16.161681 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7b7d1963-c3f0-42bc-8720-426927a37a47" (UID: "7b7d1963-c3f0-42bc-8720-426927a37a47"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:33:16.161858 master-0 kubenswrapper[7604]: I0309 16:33:16.161757 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-config" (OuterVolumeSpecName: "config") pod "7b7d1963-c3f0-42bc-8720-426927a37a47" (UID: "7b7d1963-c3f0-42bc-8720-426927a37a47"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:33:16.162361 master-0 kubenswrapper[7604]: I0309 16:33:16.162300 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7d1963-c3f0-42bc-8720-426927a37a47-kube-api-access-t85vj" (OuterVolumeSpecName: "kube-api-access-t85vj") pod "7b7d1963-c3f0-42bc-8720-426927a37a47" (UID: "7b7d1963-c3f0-42bc-8720-426927a37a47"). InnerVolumeSpecName "kube-api-access-t85vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:33:16.163228 master-0 kubenswrapper[7604]: I0309 16:33:16.163152 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7d1963-c3f0-42bc-8720-426927a37a47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b7d1963-c3f0-42bc-8720-426927a37a47" (UID: "7b7d1963-c3f0-42bc-8720-426927a37a47"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:33:16.261757 master-0 kubenswrapper[7604]: I0309 16:33:16.261682 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-serving-cert\") pod \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " Mar 09 16:33:16.261972 master-0 kubenswrapper[7604]: I0309 16:33:16.261807 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-config\") pod \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " Mar 09 16:33:16.261972 master-0 kubenswrapper[7604]: I0309 16:33:16.261833 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-client-ca\") pod \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\" (UID: 
\"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " Mar 09 16:33:16.261972 master-0 kubenswrapper[7604]: I0309 16:33:16.261903 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thtt6\" (UniqueName: \"kubernetes.io/projected/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-kube-api-access-thtt6\") pod \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\" (UID: \"067290d0-06ec-4bb5-8618-b7b52a8b6bb1\") " Mar 09 16:33:16.262271 master-0 kubenswrapper[7604]: I0309 16:33:16.262235 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85vj\" (UniqueName: \"kubernetes.io/projected/7b7d1963-c3f0-42bc-8720-426927a37a47-kube-api-access-t85vj\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.262271 master-0 kubenswrapper[7604]: I0309 16:33:16.262260 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.262271 master-0 kubenswrapper[7604]: I0309 16:33:16.262272 7604 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7d1963-c3f0-42bc-8720-426927a37a47-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.262413 master-0 kubenswrapper[7604]: I0309 16:33:16.262285 7604 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.262413 master-0 kubenswrapper[7604]: I0309 16:33:16.262298 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7d1963-c3f0-42bc-8720-426927a37a47-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.262697 master-0 kubenswrapper[7604]: I0309 16:33:16.262628 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-client-ca" (OuterVolumeSpecName: "client-ca") pod "067290d0-06ec-4bb5-8618-b7b52a8b6bb1" (UID: "067290d0-06ec-4bb5-8618-b7b52a8b6bb1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:33:16.262739 master-0 kubenswrapper[7604]: I0309 16:33:16.262647 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-config" (OuterVolumeSpecName: "config") pod "067290d0-06ec-4bb5-8618-b7b52a8b6bb1" (UID: "067290d0-06ec-4bb5-8618-b7b52a8b6bb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:33:16.265198 master-0 kubenswrapper[7604]: I0309 16:33:16.265089 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-kube-api-access-thtt6" (OuterVolumeSpecName: "kube-api-access-thtt6") pod "067290d0-06ec-4bb5-8618-b7b52a8b6bb1" (UID: "067290d0-06ec-4bb5-8618-b7b52a8b6bb1"). InnerVolumeSpecName "kube-api-access-thtt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:33:16.266227 master-0 kubenswrapper[7604]: I0309 16:33:16.266152 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "067290d0-06ec-4bb5-8618-b7b52a8b6bb1" (UID: "067290d0-06ec-4bb5-8618-b7b52a8b6bb1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:33:16.297563 master-0 kubenswrapper[7604]: I0309 16:33:16.297452 7604 generic.go:334] "Generic (PLEG): container finished" podID="7b7d1963-c3f0-42bc-8720-426927a37a47" containerID="15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d" exitCode=0 Mar 09 16:33:16.297563 master-0 kubenswrapper[7604]: I0309 16:33:16.297520 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" Mar 09 16:33:16.297940 master-0 kubenswrapper[7604]: I0309 16:33:16.297581 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" event={"ID":"7b7d1963-c3f0-42bc-8720-426927a37a47","Type":"ContainerDied","Data":"15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d"} Mar 09 16:33:16.297940 master-0 kubenswrapper[7604]: I0309 16:33:16.297743 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d49b645c4-2hd5r" event={"ID":"7b7d1963-c3f0-42bc-8720-426927a37a47","Type":"ContainerDied","Data":"db3ce33d227af9c594dddc7530e159f986bfcc3583631b361184c95de3a6f124"} Mar 09 16:33:16.297940 master-0 kubenswrapper[7604]: I0309 16:33:16.297779 7604 scope.go:117] "RemoveContainer" containerID="15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d" Mar 09 16:33:16.300814 master-0 kubenswrapper[7604]: I0309 16:33:16.300158 7604 generic.go:334] "Generic (PLEG): container finished" podID="067290d0-06ec-4bb5-8618-b7b52a8b6bb1" containerID="5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939" exitCode=0 Mar 09 16:33:16.300814 master-0 kubenswrapper[7604]: I0309 16:33:16.300198 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" 
event={"ID":"067290d0-06ec-4bb5-8618-b7b52a8b6bb1","Type":"ContainerDied","Data":"5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939"} Mar 09 16:33:16.300814 master-0 kubenswrapper[7604]: I0309 16:33:16.300236 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" event={"ID":"067290d0-06ec-4bb5-8618-b7b52a8b6bb1","Type":"ContainerDied","Data":"8c126274065003e766bcdae94018421423730b127196a7f83e555d62d1340c2b"} Mar 09 16:33:16.300814 master-0 kubenswrapper[7604]: I0309 16:33:16.300319 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw" Mar 09 16:33:16.322078 master-0 kubenswrapper[7604]: I0309 16:33:16.322036 7604 scope.go:117] "RemoveContainer" containerID="15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d" Mar 09 16:33:16.323011 master-0 kubenswrapper[7604]: E0309 16:33:16.322958 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d\": container with ID starting with 15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d not found: ID does not exist" containerID="15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d" Mar 09 16:33:16.323096 master-0 kubenswrapper[7604]: I0309 16:33:16.323001 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d"} err="failed to get container status \"15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d\": rpc error: code = NotFound desc = could not find container \"15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d\": container with ID starting with 15a841743c32973661dbfe2ed863d50d9189d8c3167105c085d3bcfad9bc8a4d not 
found: ID does not exist" Mar 09 16:33:16.323096 master-0 kubenswrapper[7604]: I0309 16:33:16.323029 7604 scope.go:117] "RemoveContainer" containerID="5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939" Mar 09 16:33:16.340916 master-0 kubenswrapper[7604]: I0309 16:33:16.340833 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"] Mar 09 16:33:16.351620 master-0 kubenswrapper[7604]: I0309 16:33:16.351544 7604 scope.go:117] "RemoveContainer" containerID="5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939" Mar 09 16:33:16.353615 master-0 kubenswrapper[7604]: E0309 16:33:16.353549 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939\": container with ID starting with 5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939 not found: ID does not exist" containerID="5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939" Mar 09 16:33:16.353775 master-0 kubenswrapper[7604]: I0309 16:33:16.353664 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939"} err="failed to get container status \"5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939\": rpc error: code = NotFound desc = could not find container \"5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939\": container with ID starting with 5c00896b50f61e7c39f4f0eac2566b284bac7e193cd980d6f60d46ca5f280939 not found: ID does not exist" Mar 09 16:33:16.354717 master-0 kubenswrapper[7604]: I0309 16:33:16.354664 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d49b645c4-2hd5r"] Mar 09 16:33:16.366708 master-0 kubenswrapper[7604]: I0309 16:33:16.366612 7604 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.367290 master-0 kubenswrapper[7604]: I0309 16:33:16.367240 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.367398 master-0 kubenswrapper[7604]: I0309 16:33:16.367388 7604 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.367552 master-0 kubenswrapper[7604]: I0309 16:33:16.367536 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thtt6\" (UniqueName: \"kubernetes.io/projected/067290d0-06ec-4bb5-8618-b7b52a8b6bb1-kube-api-access-thtt6\") on node \"master-0\" DevicePath \"\"" Mar 09 16:33:16.369816 master-0 kubenswrapper[7604]: I0309 16:33:16.369748 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw"] Mar 09 16:33:16.374060 master-0 kubenswrapper[7604]: I0309 16:33:16.373948 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b6f88c7d8-qqvsw"] Mar 09 16:33:16.727915 master-0 kubenswrapper[7604]: I0309 16:33:16.727868 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:16.727915 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:16.727915 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:16.727915 master-0 kubenswrapper[7604]: healthz check 
failed Mar 09 16:33:16.728656 master-0 kubenswrapper[7604]: I0309 16:33:16.728506 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:17.120404 master-0 kubenswrapper[7604]: I0309 16:33:17.120322 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="067290d0-06ec-4bb5-8618-b7b52a8b6bb1" path="/var/lib/kubelet/pods/067290d0-06ec-4bb5-8618-b7b52a8b6bb1/volumes" Mar 09 16:33:17.121369 master-0 kubenswrapper[7604]: I0309 16:33:17.121216 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7d1963-c3f0-42bc-8720-426927a37a47" path="/var/lib/kubelet/pods/7b7d1963-c3f0-42bc-8720-426927a37a47/volumes" Mar 09 16:33:17.197036 master-0 kubenswrapper[7604]: I0309 16:33:17.196961 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"] Mar 09 16:33:17.197954 master-0 kubenswrapper[7604]: E0309 16:33:17.197929 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7d1963-c3f0-42bc-8720-426927a37a47" containerName="controller-manager" Mar 09 16:33:17.198064 master-0 kubenswrapper[7604]: I0309 16:33:17.198048 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7d1963-c3f0-42bc-8720-426927a37a47" containerName="controller-manager" Mar 09 16:33:17.198152 master-0 kubenswrapper[7604]: E0309 16:33:17.198139 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067290d0-06ec-4bb5-8618-b7b52a8b6bb1" containerName="route-controller-manager" Mar 09 16:33:17.198224 master-0 kubenswrapper[7604]: I0309 16:33:17.198213 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="067290d0-06ec-4bb5-8618-b7b52a8b6bb1" containerName="route-controller-manager" Mar 09 16:33:17.198494 master-0 kubenswrapper[7604]: I0309 16:33:17.198478 
7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7d1963-c3f0-42bc-8720-426927a37a47" containerName="controller-manager" Mar 09 16:33:17.198591 master-0 kubenswrapper[7604]: I0309 16:33:17.198577 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="067290d0-06ec-4bb5-8618-b7b52a8b6bb1" containerName="route-controller-manager" Mar 09 16:33:17.199223 master-0 kubenswrapper[7604]: I0309 16:33:17.199204 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"] Mar 09 16:33:17.199622 master-0 kubenswrapper[7604]: I0309 16:33:17.199567 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.200085 master-0 kubenswrapper[7604]: I0309 16:33:17.200062 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.203738 master-0 kubenswrapper[7604]: I0309 16:33:17.203165 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-4n2zt" Mar 09 16:33:17.203738 master-0 kubenswrapper[7604]: I0309 16:33:17.203187 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 09 16:33:17.203738 master-0 kubenswrapper[7604]: I0309 16:33:17.203315 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 09 16:33:17.203738 master-0 kubenswrapper[7604]: I0309 16:33:17.203507 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 16:33:17.203738 master-0 kubenswrapper[7604]: I0309 16:33:17.203625 7604 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 16:33:17.203738 master-0 kubenswrapper[7604]: I0309 16:33:17.203655 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 09 16:33:17.204017 master-0 kubenswrapper[7604]: I0309 16:33:17.203894 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 16:33:17.205237 master-0 kubenswrapper[7604]: I0309 16:33:17.204055 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58glv" Mar 09 16:33:17.205237 master-0 kubenswrapper[7604]: I0309 16:33:17.204142 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 16:33:17.205237 master-0 kubenswrapper[7604]: I0309 16:33:17.204280 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 16:33:17.205237 master-0 kubenswrapper[7604]: I0309 16:33:17.204328 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 16:33:17.206127 master-0 kubenswrapper[7604]: I0309 16:33:17.205917 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 16:33:17.214968 master-0 kubenswrapper[7604]: I0309 16:33:17.214895 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"] Mar 09 16:33:17.215327 master-0 kubenswrapper[7604]: I0309 16:33:17.215298 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 16:33:17.218759 master-0 kubenswrapper[7604]: I0309 16:33:17.218680 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"] Mar 09 16:33:17.382332 master-0 kubenswrapper[7604]: I0309 16:33:17.382076 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.382332 master-0 kubenswrapper[7604]: I0309 16:33:17.382146 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.382332 master-0 kubenswrapper[7604]: I0309 16:33:17.382195 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.382332 master-0 kubenswrapper[7604]: I0309 16:33:17.382231 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.382332 master-0 kubenswrapper[7604]: I0309 16:33:17.382279 7604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4zf\" (UniqueName: \"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.382332 master-0 kubenswrapper[7604]: I0309 16:33:17.382338 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.383049 master-0 kubenswrapper[7604]: I0309 16:33:17.382374 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.383049 master-0 kubenswrapper[7604]: I0309 16:33:17.382398 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.383049 master-0 kubenswrapper[7604]: I0309 16:33:17.382443 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.484780 master-0 kubenswrapper[7604]: I0309 16:33:17.484689 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.484780 master-0 kubenswrapper[7604]: I0309 16:33:17.484787 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.485198 master-0 kubenswrapper[7604]: I0309 16:33:17.485002 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.485198 master-0 kubenswrapper[7604]: I0309 16:33:17.485099 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" 
Mar 09 16:33:17.485389 master-0 kubenswrapper[7604]: I0309 16:33:17.485352 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.485493 master-0 kubenswrapper[7604]: I0309 16:33:17.485411 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.485567 master-0 kubenswrapper[7604]: I0309 16:33:17.485525 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.485637 master-0 kubenswrapper[7604]: I0309 16:33:17.485599 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.485706 master-0 kubenswrapper[7604]: I0309 16:33:17.485675 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw4zf\" (UniqueName: 
\"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.486002 master-0 kubenswrapper[7604]: I0309 16:33:17.485964 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.486544 master-0 kubenswrapper[7604]: I0309 16:33:17.486512 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.489017 master-0 kubenswrapper[7604]: I0309 16:33:17.487489 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.489017 master-0 kubenswrapper[7604]: I0309 16:33:17.487786 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.489017 master-0 
kubenswrapper[7604]: I0309 16:33:17.488616 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.490337 master-0 kubenswrapper[7604]: I0309 16:33:17.489341 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.491554 master-0 kubenswrapper[7604]: I0309 16:33:17.491393 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.504074 master-0 kubenswrapper[7604]: I0309 16:33:17.504003 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw4zf\" (UniqueName: \"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.504846 master-0 kubenswrapper[7604]: I0309 16:33:17.504812 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") pod 
\"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.532233 master-0 kubenswrapper[7604]: I0309 16:33:17.532181 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:33:17.555089 master-0 kubenswrapper[7604]: I0309 16:33:17.555029 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:33:17.728800 master-0 kubenswrapper[7604]: I0309 16:33:17.728677 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:17.728800 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:17.728800 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:17.728800 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:17.729432 master-0 kubenswrapper[7604]: I0309 16:33:17.728812 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:17.989132 master-0 kubenswrapper[7604]: I0309 16:33:17.989088 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"] Mar 09 16:33:17.999936 master-0 kubenswrapper[7604]: W0309 16:33:17.999862 7604 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d1143ae_d94a_43f2_8e75_95aae13a5c57.slice/crio-54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca WatchSource:0}: Error finding container 54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca: Status 404 returned error can't find the container with id 54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca Mar 09 16:33:18.059127 master-0 kubenswrapper[7604]: I0309 16:33:18.057860 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"] Mar 09 16:33:18.064449 master-0 kubenswrapper[7604]: W0309 16:33:18.064364 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8677cbd3_649f_41cd_8b8a_eadca971906b.slice/crio-6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02 WatchSource:0}: Error finding container 6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02: Status 404 returned error can't find the container with id 6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02 Mar 09 16:33:18.319823 master-0 kubenswrapper[7604]: I0309 16:33:18.319740 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" event={"ID":"8677cbd3-649f-41cd-8b8a-eadca971906b","Type":"ContainerStarted","Data":"58ca4bfd8d3d92cf6b0638eb596cecb093134580ce5c529622e4707ab6f67862"} Mar 09 16:33:18.319823 master-0 kubenswrapper[7604]: I0309 16:33:18.319813 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" event={"ID":"8677cbd3-649f-41cd-8b8a-eadca971906b","Type":"ContainerStarted","Data":"6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02"} Mar 09 16:33:18.321102 master-0 kubenswrapper[7604]: I0309 16:33:18.321064 7604 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:33:18.321636 master-0 kubenswrapper[7604]: I0309 16:33:18.321598 7604 patch_prober.go:28] interesting pod/route-controller-manager-675f85b8f7-bt9gb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body=
Mar 09 16:33:18.321720 master-0 kubenswrapper[7604]: I0309 16:33:18.321662 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused"
Mar 09 16:33:18.322727 master-0 kubenswrapper[7604]: I0309 16:33:18.322679 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" event={"ID":"7d1143ae-d94a-43f2-8e75-95aae13a5c57","Type":"ContainerStarted","Data":"103d3eac07aecf0258cc2c832ca414dc5ada6722c47422884569884c3c3f57fc"}
Mar 09 16:33:18.322727 master-0 kubenswrapper[7604]: I0309 16:33:18.322725 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" event={"ID":"7d1143ae-d94a-43f2-8e75-95aae13a5c57","Type":"ContainerStarted","Data":"54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca"}
Mar 09 16:33:18.324818 master-0 kubenswrapper[7604]: I0309 16:33:18.324752 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:33:18.336463 master-0 kubenswrapper[7604]: I0309 16:33:18.335347 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:33:18.350285 master-0 kubenswrapper[7604]: I0309 16:33:18.350192 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" podStartSLOduration=3.350173371 podStartE2EDuration="3.350173371s" podCreationTimestamp="2026-03-09 16:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:33:18.345791866 +0000 UTC m=+455.399761309" watchObservedRunningTime="2026-03-09 16:33:18.350173371 +0000 UTC m=+455.404142794"
Mar 09 16:33:18.505611 master-0 kubenswrapper[7604]: I0309 16:33:18.505336 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" podStartSLOduration=3.505320309 podStartE2EDuration="3.505320309s" podCreationTimestamp="2026-03-09 16:33:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:33:18.501926983 +0000 UTC m=+455.555896426" watchObservedRunningTime="2026-03-09 16:33:18.505320309 +0000 UTC m=+455.559289732"
Mar 09 16:33:18.729171 master-0 kubenswrapper[7604]: I0309 16:33:18.729019 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:18.729171 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:18.729171 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:18.729171 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:18.729171 master-0 kubenswrapper[7604]: I0309 16:33:18.729121 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:19.334772 master-0 kubenswrapper[7604]: I0309 16:33:19.334691 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:33:19.729323 master-0 kubenswrapper[7604]: I0309 16:33:19.729252 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:19.729323 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:19.729323 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:19.729323 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:19.729798 master-0 kubenswrapper[7604]: I0309 16:33:19.729338 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:20.729844 master-0 kubenswrapper[7604]: I0309 16:33:20.729758 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:20.729844 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:20.729844 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:20.729844 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:20.730772 master-0 kubenswrapper[7604]: I0309 16:33:20.729871 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:21.728708 master-0 kubenswrapper[7604]: I0309 16:33:21.728607 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:21.728708 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:21.728708 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:21.728708 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:21.729351 master-0 kubenswrapper[7604]: I0309 16:33:21.728751 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:22.728547 master-0 kubenswrapper[7604]: I0309 16:33:22.728470 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:22.728547 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:22.728547 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:22.728547 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:22.729416 master-0 kubenswrapper[7604]: I0309 16:33:22.728578 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:23.728673 master-0 kubenswrapper[7604]: I0309 16:33:23.728580 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:23.728673 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:23.728673 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:23.728673 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:23.729608 master-0 kubenswrapper[7604]: I0309 16:33:23.728693 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:24.728617 master-0 kubenswrapper[7604]: I0309 16:33:24.728529 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:24.728617 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:24.728617 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:24.728617 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:24.729635 master-0 kubenswrapper[7604]: I0309 16:33:24.728643 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:25.731225 master-0 kubenswrapper[7604]: I0309 16:33:25.731139 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:25.731225 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:25.731225 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:25.731225 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:25.731997 master-0 kubenswrapper[7604]: I0309 16:33:25.731242 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:26.727678 master-0 kubenswrapper[7604]: I0309 16:33:26.727555 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:26.727678 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:26.727678 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:26.727678 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:26.727678 master-0 kubenswrapper[7604]: I0309 16:33:26.727621 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:27.728953 master-0 kubenswrapper[7604]: I0309 16:33:27.728890 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:27.728953 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:27.728953 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:27.728953 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:27.729710 master-0 kubenswrapper[7604]: I0309 16:33:27.728978 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:28.727823 master-0 kubenswrapper[7604]: I0309 16:33:28.727745 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:28.727823 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:28.727823 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:28.727823 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:28.767971 master-0 kubenswrapper[7604]: I0309 16:33:28.727852 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:29.729747 master-0 kubenswrapper[7604]: I0309 16:33:29.729662 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:29.729747 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:29.729747 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:29.729747 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:29.730190 master-0 kubenswrapper[7604]: I0309 16:33:29.729764 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:30.728336 master-0 kubenswrapper[7604]: I0309 16:33:30.728244 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:30.728336 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:30.728336 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:30.728336 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:30.728983 master-0 kubenswrapper[7604]: I0309 16:33:30.728369 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:31.727732 master-0 kubenswrapper[7604]: I0309 16:33:31.727645 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:31.727732 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:31.727732 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:31.727732 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:31.728179 master-0 kubenswrapper[7604]: I0309 16:33:31.727734 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:32.728780 master-0 kubenswrapper[7604]: I0309 16:33:32.728684 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:32.728780 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:32.728780 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:32.728780 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:32.728780 master-0 kubenswrapper[7604]: I0309 16:33:32.728792 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:33.729026 master-0 kubenswrapper[7604]: I0309 16:33:33.728915 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:33.729026 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:33.729026 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:33.729026 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:33.730016 master-0 kubenswrapper[7604]: I0309 16:33:33.729041 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:34.727705 master-0 kubenswrapper[7604]: I0309 16:33:34.727645 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:34.727705 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:34.727705 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:34.727705 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:34.728145 master-0 kubenswrapper[7604]: I0309 16:33:34.728116 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:35.183357 master-0 kubenswrapper[7604]: I0309 16:33:35.183305 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rrgmr"]
Mar 09 16:33:35.185130 master-0 kubenswrapper[7604]: I0309 16:33:35.185109 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.187385 master-0 kubenswrapper[7604]: I0309 16:33:35.187317 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-tdm87"
Mar 09 16:33:35.192578 master-0 kubenswrapper[7604]: I0309 16:33:35.192544 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 09 16:33:35.375575 master-0 kubenswrapper[7604]: I0309 16:33:35.375462 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vgl\" (UniqueName: \"kubernetes.io/projected/ae16ddba-7385-4bde-8b7e-5be9f8106890-kube-api-access-57vgl\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.375575 master-0 kubenswrapper[7604]: I0309 16:33:35.375575 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae16ddba-7385-4bde-8b7e-5be9f8106890-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.375997 master-0 kubenswrapper[7604]: I0309 16:33:35.375721 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ae16ddba-7385-4bde-8b7e-5be9f8106890-ready\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.375997 master-0 kubenswrapper[7604]: I0309 16:33:35.375753 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae16ddba-7385-4bde-8b7e-5be9f8106890-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.477362 master-0 kubenswrapper[7604]: I0309 16:33:35.477183 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ae16ddba-7385-4bde-8b7e-5be9f8106890-ready\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.477696 master-0 kubenswrapper[7604]: I0309 16:33:35.477525 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae16ddba-7385-4bde-8b7e-5be9f8106890-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.477769 master-0 kubenswrapper[7604]: I0309 16:33:35.477698 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae16ddba-7385-4bde-8b7e-5be9f8106890-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.477856 master-0 kubenswrapper[7604]: I0309 16:33:35.477814 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57vgl\" (UniqueName: \"kubernetes.io/projected/ae16ddba-7385-4bde-8b7e-5be9f8106890-kube-api-access-57vgl\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.477922 master-0 kubenswrapper[7604]: I0309 16:33:35.477890 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae16ddba-7385-4bde-8b7e-5be9f8106890-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.477993 master-0 kubenswrapper[7604]: I0309 16:33:35.477831 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ae16ddba-7385-4bde-8b7e-5be9f8106890-ready\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.478604 master-0 kubenswrapper[7604]: I0309 16:33:35.478568 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae16ddba-7385-4bde-8b7e-5be9f8106890-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.507908 master-0 kubenswrapper[7604]: I0309 16:33:35.507842 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57vgl\" (UniqueName: \"kubernetes.io/projected/ae16ddba-7385-4bde-8b7e-5be9f8106890-kube-api-access-57vgl\") pod \"cni-sysctl-allowlist-ds-rrgmr\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.728577 master-0 kubenswrapper[7604]: I0309 16:33:35.728309 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:35.728577 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:35.728577 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:35.728577 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:35.728577 master-0 kubenswrapper[7604]: I0309 16:33:35.728411 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:35.803125 master-0 kubenswrapper[7604]: I0309 16:33:35.803046 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:35.829014 master-0 kubenswrapper[7604]: W0309 16:33:35.828937 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae16ddba_7385_4bde_8b7e_5be9f8106890.slice/crio-d7038c8b74d085eeb74dcf495b5b2f83248e510519cd54fbf051d1dea5e66f5f WatchSource:0}: Error finding container d7038c8b74d085eeb74dcf495b5b2f83248e510519cd54fbf051d1dea5e66f5f: Status 404 returned error can't find the container with id d7038c8b74d085eeb74dcf495b5b2f83248e510519cd54fbf051d1dea5e66f5f
Mar 09 16:33:36.446663 master-0 kubenswrapper[7604]: I0309 16:33:36.446604 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" event={"ID":"ae16ddba-7385-4bde-8b7e-5be9f8106890","Type":"ContainerStarted","Data":"2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776"}
Mar 09 16:33:36.447327 master-0 kubenswrapper[7604]: I0309 16:33:36.447306 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" event={"ID":"ae16ddba-7385-4bde-8b7e-5be9f8106890","Type":"ContainerStarted","Data":"d7038c8b74d085eeb74dcf495b5b2f83248e510519cd54fbf051d1dea5e66f5f"}
Mar 09 16:33:36.447826 master-0 kubenswrapper[7604]: I0309 16:33:36.447751 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:36.466840 master-0 kubenswrapper[7604]: I0309 16:33:36.466754 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" podStartSLOduration=1.466732227 podStartE2EDuration="1.466732227s" podCreationTimestamp="2026-03-09 16:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:33:36.462471686 +0000 UTC m=+473.516441119" watchObservedRunningTime="2026-03-09 16:33:36.466732227 +0000 UTC m=+473.520701650"
Mar 09 16:33:36.728256 master-0 kubenswrapper[7604]: I0309 16:33:36.728091 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:36.728256 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:36.728256 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:36.728256 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:36.728256 master-0 kubenswrapper[7604]: I0309 16:33:36.728166 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:37.468713 master-0 kubenswrapper[7604]: I0309 16:33:37.468626 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr"
Mar 09 16:33:37.728912 master-0 kubenswrapper[7604]: I0309 16:33:37.728751 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:33:37.728912 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:33:37.728912 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:33:37.728912 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:33:37.728912 master-0 kubenswrapper[7604]: I0309 16:33:37.728828 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:33:37.858833 master-0 kubenswrapper[7604]: I0309 16:33:37.858724 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-gwf86"]
Mar 09 16:33:37.860392 master-0 kubenswrapper[7604]: I0309 16:33:37.860349 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:37.863078 master-0 kubenswrapper[7604]: I0309 16:33:37.863029 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 09 16:33:37.863383 master-0 kubenswrapper[7604]: I0309 16:33:37.863353 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 09 16:33:37.863617 master-0 kubenswrapper[7604]: I0309 16:33:37.863593 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-hhfdt"
Mar 09 16:33:37.863761 master-0 kubenswrapper[7604]: I0309 16:33:37.863741 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 09 16:33:37.863875 master-0 kubenswrapper[7604]: I0309 16:33:37.863857 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 09 16:33:37.864850 master-0 kubenswrapper[7604]: I0309 16:33:37.864807 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 09 16:33:37.871854 master-0 kubenswrapper[7604]: I0309 16:33:37.871792 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-gwf86"]
Mar 09 16:33:37.872286 master-0 kubenswrapper[7604]: I0309 16:33:37.871922 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 09 16:33:38.014497 master-0 kubenswrapper[7604]: I0309 16:33:38.014230 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.014497 master-0 kubenswrapper[7604]: I0309 16:33:38.014332 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-federate-client-tls\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.014910 master-0 kubenswrapper[7604]: I0309 16:33:38.014576 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.014910 master-0 kubenswrapper[7604]: I0309 16:33:38.014728 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.014910 master-0 kubenswrapper[7604]: I0309 16:33:38.014761 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.014910 master-0 kubenswrapper[7604]: I0309 16:33:38.014793 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.014910 master-0 kubenswrapper[7604]: I0309 16:33:38.014890 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.015070 master-0 kubenswrapper[7604]: I0309 16:33:38.015004 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lwx\" (UniqueName: \"kubernetes.io/projected/268b582b-efd2-44be-9e2a-3ee7322603c9-kube-api-access-q9lwx\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.116520 master-0 kubenswrapper[7604]: I0309 16:33:38.116447 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.116520 master-0 kubenswrapper[7604]: I0309 16:33:38.116501 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.116520 master-0 kubenswrapper[7604]: I0309 16:33:38.116529 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.117074 master-0 kubenswrapper[7604]: I0309 16:33:38.116615 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.117074 master-0 kubenswrapper[7604]: I0309 16:33:38.116669 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lwx\" (UniqueName: \"kubernetes.io/projected/268b582b-efd2-44be-9e2a-3ee7322603c9-kube-api-access-q9lwx\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.117074 master-0 kubenswrapper[7604]: I0309 16:33:38.116979 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.117188 master-0 kubenswrapper[7604]: I0309 16:33:38.117119 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-federate-client-tls\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.117230 master-0 kubenswrapper[7604]: I0309 16:33:38.117206 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.118227 master-0 kubenswrapper[7604]: I0309 16:33:38.118183 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.118719 master-0 kubenswrapper[7604]: I0309 16:33:38.118676 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.118944 master-0 kubenswrapper[7604]: I0309 16:33:38.118906 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.120503 master-0 kubenswrapper[7604]: I0309 16:33:38.120261 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86"
Mar 09 16:33:38.121026 master-0 kubenswrapper[7604]: I0309 16:33:38.120991 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") "
pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:33:38.122092 master-0 kubenswrapper[7604]: I0309 16:33:38.122046 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-federate-client-tls\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:33:38.122749 master-0 kubenswrapper[7604]: I0309 16:33:38.122707 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:33:38.137963 master-0 kubenswrapper[7604]: I0309 16:33:38.137884 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lwx\" (UniqueName: \"kubernetes.io/projected/268b582b-efd2-44be-9e2a-3ee7322603c9-kube-api-access-q9lwx\") pod \"telemeter-client-d4f6dc665-gwf86\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:33:38.181092 master-0 kubenswrapper[7604]: I0309 16:33:38.181016 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:33:38.183938 master-0 kubenswrapper[7604]: I0309 16:33:38.183896 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rrgmr"] Mar 09 16:33:38.669137 master-0 kubenswrapper[7604]: I0309 16:33:38.666520 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-gwf86"] Mar 09 16:33:38.675286 master-0 kubenswrapper[7604]: W0309 16:33:38.675216 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod268b582b_efd2_44be_9e2a_3ee7322603c9.slice/crio-cefc455c07dd55ee166873c714c208ba56515212dc9b418766d04bbb74b92132 WatchSource:0}: Error finding container cefc455c07dd55ee166873c714c208ba56515212dc9b418766d04bbb74b92132: Status 404 returned error can't find the container with id cefc455c07dd55ee166873c714c208ba56515212dc9b418766d04bbb74b92132 Mar 09 16:33:38.678146 master-0 kubenswrapper[7604]: I0309 16:33:38.678106 7604 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 16:33:38.730459 master-0 kubenswrapper[7604]: I0309 16:33:38.730357 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:38.730459 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:38.730459 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:38.730459 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:38.730459 master-0 kubenswrapper[7604]: I0309 16:33:38.730463 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:39.480577 master-0 kubenswrapper[7604]: I0309 16:33:39.480084 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerStarted","Data":"cefc455c07dd55ee166873c714c208ba56515212dc9b418766d04bbb74b92132"} Mar 09 16:33:39.480577 master-0 kubenswrapper[7604]: I0309 16:33:39.480226 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" gracePeriod=30 Mar 09 16:33:39.728723 master-0 kubenswrapper[7604]: I0309 16:33:39.728465 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:39.728723 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:39.728723 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:39.728723 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:39.728723 master-0 kubenswrapper[7604]: I0309 16:33:39.728581 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:40.727948 master-0 kubenswrapper[7604]: I0309 16:33:40.727889 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:40.727948 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:40.727948 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:40.727948 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:40.728294 master-0 kubenswrapper[7604]: I0309 16:33:40.727965 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:41.729386 master-0 kubenswrapper[7604]: I0309 16:33:41.729300 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:41.729386 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:41.729386 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:41.729386 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:41.730220 master-0 kubenswrapper[7604]: I0309 16:33:41.729402 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:42.512468 master-0 kubenswrapper[7604]: I0309 16:33:42.512352 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerStarted","Data":"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5"} Mar 09 16:33:42.728891 master-0 kubenswrapper[7604]: I0309 16:33:42.728637 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:42.728891 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:42.728891 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:42.728891 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:42.728891 master-0 kubenswrapper[7604]: I0309 16:33:42.728738 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:43.522466 master-0 kubenswrapper[7604]: I0309 16:33:43.522247 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerStarted","Data":"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9"} Mar 09 16:33:43.522466 master-0 kubenswrapper[7604]: I0309 16:33:43.522304 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerStarted","Data":"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe"} Mar 09 16:33:43.547107 master-0 kubenswrapper[7604]: I0309 16:33:43.546984 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" podStartSLOduration=2.206870052 podStartE2EDuration="6.546964892s" podCreationTimestamp="2026-03-09 16:33:37 +0000 UTC" firstStartedPulling="2026-03-09 16:33:38.6780008 +0000 UTC m=+475.731970223" lastFinishedPulling="2026-03-09 16:33:43.01809564 +0000 UTC m=+480.072065063" observedRunningTime="2026-03-09 16:33:43.543740491 +0000 UTC m=+480.597709934" 
watchObservedRunningTime="2026-03-09 16:33:43.546964892 +0000 UTC m=+480.600934315" Mar 09 16:33:43.728913 master-0 kubenswrapper[7604]: I0309 16:33:43.728808 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:43.728913 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:43.728913 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:43.728913 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:43.728913 master-0 kubenswrapper[7604]: I0309 16:33:43.728902 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:44.730550 master-0 kubenswrapper[7604]: I0309 16:33:44.730466 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:44.730550 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:44.730550 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:44.730550 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:44.731555 master-0 kubenswrapper[7604]: I0309 16:33:44.730564 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:44.754959 master-0 kubenswrapper[7604]: I0309 16:33:44.754850 7604 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/multus-admission-controller-7769569c45-jcsfw"] Mar 09 16:33:44.756332 master-0 kubenswrapper[7604]: I0309 16:33:44.756291 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:44.766648 master-0 kubenswrapper[7604]: I0309 16:33:44.766561 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-jcsfw"] Mar 09 16:33:44.770021 master-0 kubenswrapper[7604]: I0309 16:33:44.769942 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-69c4t" Mar 09 16:33:44.843399 master-0 kubenswrapper[7604]: I0309 16:33:44.843310 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:44.843757 master-0 kubenswrapper[7604]: I0309 16:33:44.843412 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26xps\" (UniqueName: \"kubernetes.io/projected/e91a0e23-c95b-4290-9c0c-29101febfc8f-kube-api-access-26xps\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:44.945748 master-0 kubenswrapper[7604]: I0309 16:33:44.945670 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26xps\" (UniqueName: \"kubernetes.io/projected/e91a0e23-c95b-4290-9c0c-29101febfc8f-kube-api-access-26xps\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " 
pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:44.945967 master-0 kubenswrapper[7604]: I0309 16:33:44.945825 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:44.949897 master-0 kubenswrapper[7604]: I0309 16:33:44.949867 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:44.964926 master-0 kubenswrapper[7604]: I0309 16:33:44.964844 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26xps\" (UniqueName: \"kubernetes.io/projected/e91a0e23-c95b-4290-9c0c-29101febfc8f-kube-api-access-26xps\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:45.082075 master-0 kubenswrapper[7604]: I0309 16:33:45.081889 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:33:45.473345 master-0 kubenswrapper[7604]: I0309 16:33:45.473263 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-jcsfw"] Mar 09 16:33:45.538921 master-0 kubenswrapper[7604]: I0309 16:33:45.538861 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" event={"ID":"e91a0e23-c95b-4290-9c0c-29101febfc8f","Type":"ContainerStarted","Data":"70eddae976602b0fd7a417da85764552e2ce702063285733d01e52d020ee14c3"} Mar 09 16:33:45.728857 master-0 kubenswrapper[7604]: I0309 16:33:45.728808 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:45.728857 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:45.728857 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:45.728857 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:45.729057 master-0 kubenswrapper[7604]: I0309 16:33:45.728883 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:45.806347 master-0 kubenswrapper[7604]: E0309 16:33:45.806265 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:33:45.810546 master-0 kubenswrapper[7604]: E0309 
16:33:45.810459 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:33:45.812848 master-0 kubenswrapper[7604]: E0309 16:33:45.812803 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:33:45.812934 master-0 kubenswrapper[7604]: E0309 16:33:45.812860 7604 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" Mar 09 16:33:46.548048 master-0 kubenswrapper[7604]: I0309 16:33:46.547973 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" event={"ID":"e91a0e23-c95b-4290-9c0c-29101febfc8f","Type":"ContainerStarted","Data":"fbe25b04c494c84d705ccde66178144a471d18d9794554b9466ae78c072d6f3c"} Mar 09 16:33:46.548048 master-0 kubenswrapper[7604]: I0309 16:33:46.548026 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" event={"ID":"e91a0e23-c95b-4290-9c0c-29101febfc8f","Type":"ContainerStarted","Data":"7229badbbe789476e27c07386d127a2dbda94c6a901db7f0db3114a56be3ac6d"} Mar 09 16:33:46.565244 master-0 kubenswrapper[7604]: I0309 16:33:46.565143 7604 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" podStartSLOduration=2.56512093 podStartE2EDuration="2.56512093s" podCreationTimestamp="2026-03-09 16:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:33:46.561682492 +0000 UTC m=+483.615651935" watchObservedRunningTime="2026-03-09 16:33:46.56512093 +0000 UTC m=+483.619090353" Mar 09 16:33:46.596909 master-0 kubenswrapper[7604]: I0309 16:33:46.596821 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-g8n5t"] Mar 09 16:33:46.597267 master-0 kubenswrapper[7604]: I0309 16:33:46.597099 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="multus-admission-controller" containerID="cri-o://936e54f2dcd8b97ec29ef8044719dc7e3e661dccc2b4396664320d24598d2652" gracePeriod=30 Mar 09 16:33:46.597267 master-0 kubenswrapper[7604]: I0309 16:33:46.597165 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="kube-rbac-proxy" containerID="cri-o://2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0" gracePeriod=30 Mar 09 16:33:46.727944 master-0 kubenswrapper[7604]: I0309 16:33:46.727821 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:46.727944 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:46.727944 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:46.727944 master-0 
kubenswrapper[7604]: healthz check failed Mar 09 16:33:46.727944 master-0 kubenswrapper[7604]: I0309 16:33:46.727884 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:47.558257 master-0 kubenswrapper[7604]: I0309 16:33:47.558187 7604 generic.go:334] "Generic (PLEG): container finished" podID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerID="2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0" exitCode=0 Mar 09 16:33:47.559058 master-0 kubenswrapper[7604]: I0309 16:33:47.558277 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" event={"ID":"4bd3c489-427c-4a47-b7b9-5d1611b9be12","Type":"ContainerDied","Data":"2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0"} Mar 09 16:33:47.728454 master-0 kubenswrapper[7604]: I0309 16:33:47.728356 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:47.728454 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:47.728454 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:47.728454 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:47.728975 master-0 kubenswrapper[7604]: I0309 16:33:47.728940 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:48.728212 master-0 kubenswrapper[7604]: I0309 16:33:48.728112 7604 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:48.728212 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:48.728212 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:48.728212 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:48.729149 master-0 kubenswrapper[7604]: I0309 16:33:48.728232 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:49.727686 master-0 kubenswrapper[7604]: I0309 16:33:49.727611 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:49.727686 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:49.727686 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:49.727686 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:49.728011 master-0 kubenswrapper[7604]: I0309 16:33:49.727708 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:50.729457 master-0 kubenswrapper[7604]: I0309 16:33:50.729363 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:33:50.729457 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:50.729457 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:50.729457 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:50.729457 master-0 kubenswrapper[7604]: I0309 16:33:50.729459 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:51.728411 master-0 kubenswrapper[7604]: I0309 16:33:51.728293 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:51.728411 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:51.728411 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:51.728411 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:51.728411 master-0 kubenswrapper[7604]: I0309 16:33:51.728396 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:52.728829 master-0 kubenswrapper[7604]: I0309 16:33:52.728743 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:52.728829 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:52.728829 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:52.728829 master-0 kubenswrapper[7604]: healthz 
check failed Mar 09 16:33:52.728829 master-0 kubenswrapper[7604]: I0309 16:33:52.728824 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:53.728341 master-0 kubenswrapper[7604]: I0309 16:33:53.728282 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:53.728341 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:53.728341 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:53.728341 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:53.728750 master-0 kubenswrapper[7604]: I0309 16:33:53.728346 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:54.727790 master-0 kubenswrapper[7604]: I0309 16:33:54.727741 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:54.727790 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:54.727790 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:54.727790 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:54.727790 master-0 kubenswrapper[7604]: I0309 16:33:54.727810 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" 
podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:55.731546 master-0 kubenswrapper[7604]: I0309 16:33:55.731489 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:55.731546 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:55.731546 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:55.731546 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:55.732288 master-0 kubenswrapper[7604]: I0309 16:33:55.732254 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:55.805722 master-0 kubenswrapper[7604]: E0309 16:33:55.805618 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:33:55.807940 master-0 kubenswrapper[7604]: E0309 16:33:55.807857 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:33:55.809506 master-0 kubenswrapper[7604]: E0309 16:33:55.809417 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:33:55.809617 master-0 kubenswrapper[7604]: E0309 16:33:55.809536 7604 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" Mar 09 16:33:56.728746 master-0 kubenswrapper[7604]: I0309 16:33:56.728654 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:56.728746 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:56.728746 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:56.728746 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:56.729105 master-0 kubenswrapper[7604]: I0309 16:33:56.728772 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:57.728915 master-0 kubenswrapper[7604]: I0309 16:33:57.728851 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:57.728915 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:57.728915 master-0 
kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:57.728915 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:57.728915 master-0 kubenswrapper[7604]: I0309 16:33:57.728912 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:58.728198 master-0 kubenswrapper[7604]: I0309 16:33:58.728123 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:58.728198 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:58.728198 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:58.728198 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:58.728671 master-0 kubenswrapper[7604]: I0309 16:33:58.728223 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:33:59.728293 master-0 kubenswrapper[7604]: I0309 16:33:59.728161 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:33:59.728293 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:33:59.728293 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:33:59.728293 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:33:59.728942 master-0 kubenswrapper[7604]: I0309 16:33:59.728335 7604 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:00.729788 master-0 kubenswrapper[7604]: I0309 16:34:00.729687 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:00.729788 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:00.729788 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:00.729788 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:00.730491 master-0 kubenswrapper[7604]: I0309 16:34:00.729837 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:01.729351 master-0 kubenswrapper[7604]: I0309 16:34:01.729266 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:01.729351 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:01.729351 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:01.729351 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:01.729774 master-0 kubenswrapper[7604]: I0309 16:34:01.729379 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 09 16:34:02.728384 master-0 kubenswrapper[7604]: I0309 16:34:02.728282 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:02.728384 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:02.728384 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:02.728384 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:02.729002 master-0 kubenswrapper[7604]: I0309 16:34:02.728410 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:03.727974 master-0 kubenswrapper[7604]: I0309 16:34:03.727902 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:03.727974 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:03.727974 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:03.727974 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:03.727974 master-0 kubenswrapper[7604]: I0309 16:34:03.727971 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:04.729315 master-0 kubenswrapper[7604]: I0309 16:34:04.729214 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:04.729315 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:04.729315 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:04.729315 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:04.729315 master-0 kubenswrapper[7604]: I0309 16:34:04.729310 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:05.728351 master-0 kubenswrapper[7604]: I0309 16:34:05.728276 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:05.728351 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:05.728351 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:05.728351 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:05.728710 master-0 kubenswrapper[7604]: I0309 16:34:05.728377 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:05.806716 master-0 kubenswrapper[7604]: E0309 16:34:05.806589 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:34:05.808933 
master-0 kubenswrapper[7604]: E0309 16:34:05.808880 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:34:05.811511 master-0 kubenswrapper[7604]: E0309 16:34:05.811340 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:34:05.811511 master-0 kubenswrapper[7604]: E0309 16:34:05.811491 7604 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" Mar 09 16:34:06.728850 master-0 kubenswrapper[7604]: I0309 16:34:06.728753 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:06.728850 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:06.728850 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:06.728850 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:06.729415 master-0 kubenswrapper[7604]: I0309 16:34:06.728876 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:07.728609 master-0 kubenswrapper[7604]: I0309 16:34:07.728531 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:07.728609 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:07.728609 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:07.728609 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:07.729290 master-0 kubenswrapper[7604]: I0309 16:34:07.728634 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:08.625257 master-0 kubenswrapper[7604]: I0309 16:34:08.625177 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 09 16:34:08.626269 master-0 kubenswrapper[7604]: I0309 16:34:08.626215 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.629484 master-0 kubenswrapper[7604]: I0309 16:34:08.629089 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-qmw78" Mar 09 16:34:08.629484 master-0 kubenswrapper[7604]: I0309 16:34:08.629272 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 09 16:34:08.631858 master-0 kubenswrapper[7604]: I0309 16:34:08.631789 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-var-lock\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.631947 master-0 kubenswrapper[7604]: I0309 16:34:08.631918 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797303d2-6d31-42f6-a1a4-c894509fba00-kube-api-access\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.632118 master-0 kubenswrapper[7604]: I0309 16:34:08.632090 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.680148 master-0 kubenswrapper[7604]: I0309 16:34:08.680056 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 09 16:34:08.728745 master-0 kubenswrapper[7604]: I0309 16:34:08.728646 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:08.728745 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:08.728745 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:08.728745 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:08.728745 master-0 kubenswrapper[7604]: I0309 16:34:08.728740 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:08.734154 master-0 kubenswrapper[7604]: I0309 16:34:08.734105 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-var-lock\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.734345 master-0 kubenswrapper[7604]: I0309 16:34:08.734234 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-var-lock\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.734345 master-0 kubenswrapper[7604]: I0309 16:34:08.734263 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797303d2-6d31-42f6-a1a4-c894509fba00-kube-api-access\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.734498 master-0 kubenswrapper[7604]: I0309 16:34:08.734466 7604 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.734604 master-0 kubenswrapper[7604]: I0309 16:34:08.734580 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.752553 master-0 kubenswrapper[7604]: I0309 16:34:08.752476 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797303d2-6d31-42f6-a1a4-c894509fba00-kube-api-access\") pod \"installer-2-master-0\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:08.946656 master-0 kubenswrapper[7604]: I0309 16:34:08.946553 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:09.466647 master-0 kubenswrapper[7604]: I0309 16:34:09.466390 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 09 16:34:09.612643 master-0 kubenswrapper[7604]: I0309 16:34:09.612587 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rrgmr_ae16ddba-7385-4bde-8b7e-5be9f8106890/kube-multus-additional-cni-plugins/0.log" Mar 09 16:34:09.613288 master-0 kubenswrapper[7604]: I0309 16:34:09.612766 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" Mar 09 16:34:09.635391 master-0 kubenswrapper[7604]: E0309 16:34:09.635203 7604 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd3c489_427c_4a47_b7b9_5d1611b9be12.slice/crio-2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd3c489_427c_4a47_b7b9_5d1611b9be12.slice/crio-conmon-2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae16ddba_7385_4bde_8b7e_5be9f8106890.slice/crio-2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae16ddba_7385_4bde_8b7e_5be9f8106890.slice/crio-conmon-2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776.scope\": RecentStats: unable to find data in memory cache]" Mar 09 16:34:09.635771 master-0 kubenswrapper[7604]: E0309 16:34:09.635395 7604 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd3c489_427c_4a47_b7b9_5d1611b9be12.slice/crio-2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd3c489_427c_4a47_b7b9_5d1611b9be12.slice/crio-conmon-2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae16ddba_7385_4bde_8b7e_5be9f8106890.slice/crio-conmon-2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae16ddba_7385_4bde_8b7e_5be9f8106890.slice/crio-2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776.scope\": RecentStats: unable to find data in memory cache]" Mar 09 16:34:09.728051 master-0 kubenswrapper[7604]: I0309 16:34:09.727613 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"797303d2-6d31-42f6-a1a4-c894509fba00","Type":"ContainerStarted","Data":"3299202c28b8acf777efcf9fdf25fde3d2b0c3f7effed599dce85a012e3a3b40"} Mar 09 16:34:09.728567 master-0 kubenswrapper[7604]: I0309 16:34:09.728482 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:09.728567 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:09.728567 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:09.728567 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:09.728567 master-0 kubenswrapper[7604]: I0309 16:34:09.728543 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:09.730890 master-0 kubenswrapper[7604]: I0309 16:34:09.730847 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rrgmr_ae16ddba-7385-4bde-8b7e-5be9f8106890/kube-multus-additional-cni-plugins/0.log" Mar 09 16:34:09.731351 master-0 
kubenswrapper[7604]: I0309 16:34:09.730900 7604 generic.go:334] "Generic (PLEG): container finished" podID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" exitCode=137 Mar 09 16:34:09.731351 master-0 kubenswrapper[7604]: I0309 16:34:09.730928 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" event={"ID":"ae16ddba-7385-4bde-8b7e-5be9f8106890","Type":"ContainerDied","Data":"2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776"} Mar 09 16:34:09.731351 master-0 kubenswrapper[7604]: I0309 16:34:09.730952 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" event={"ID":"ae16ddba-7385-4bde-8b7e-5be9f8106890","Type":"ContainerDied","Data":"d7038c8b74d085eeb74dcf495b5b2f83248e510519cd54fbf051d1dea5e66f5f"} Mar 09 16:34:09.731351 master-0 kubenswrapper[7604]: I0309 16:34:09.730973 7604 scope.go:117] "RemoveContainer" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" Mar 09 16:34:09.731351 master-0 kubenswrapper[7604]: I0309 16:34:09.731263 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rrgmr" Mar 09 16:34:09.760556 master-0 kubenswrapper[7604]: I0309 16:34:09.760309 7604 scope.go:117] "RemoveContainer" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" Mar 09 16:34:09.760718 master-0 kubenswrapper[7604]: I0309 16:34:09.760563 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ae16ddba-7385-4bde-8b7e-5be9f8106890-ready\") pod \"ae16ddba-7385-4bde-8b7e-5be9f8106890\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " Mar 09 16:34:09.760806 master-0 kubenswrapper[7604]: I0309 16:34:09.760785 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae16ddba-7385-4bde-8b7e-5be9f8106890-tuning-conf-dir\") pod \"ae16ddba-7385-4bde-8b7e-5be9f8106890\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " Mar 09 16:34:09.760884 master-0 kubenswrapper[7604]: I0309 16:34:09.760838 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae16ddba-7385-4bde-8b7e-5be9f8106890-cni-sysctl-allowlist\") pod \"ae16ddba-7385-4bde-8b7e-5be9f8106890\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " Mar 09 16:34:09.761093 master-0 kubenswrapper[7604]: I0309 16:34:09.760963 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57vgl\" (UniqueName: \"kubernetes.io/projected/ae16ddba-7385-4bde-8b7e-5be9f8106890-kube-api-access-57vgl\") pod \"ae16ddba-7385-4bde-8b7e-5be9f8106890\" (UID: \"ae16ddba-7385-4bde-8b7e-5be9f8106890\") " Mar 09 16:34:09.761188 master-0 kubenswrapper[7604]: E0309 16:34:09.761111 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776\": container with ID starting with 2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776 not found: ID does not exist" containerID="2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776" Mar 09 16:34:09.761188 master-0 kubenswrapper[7604]: I0309 16:34:09.761147 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776"} err="failed to get container status \"2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776\": rpc error: code = NotFound desc = could not find container \"2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776\": container with ID starting with 2f1352be377e50446c622f757b63c529b6e77c78a0ca7e049d68470925f3e776 not found: ID does not exist" Mar 09 16:34:09.761380 master-0 kubenswrapper[7604]: I0309 16:34:09.761259 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae16ddba-7385-4bde-8b7e-5be9f8106890-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "ae16ddba-7385-4bde-8b7e-5be9f8106890" (UID: "ae16ddba-7385-4bde-8b7e-5be9f8106890"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:34:09.761473 master-0 kubenswrapper[7604]: I0309 16:34:09.761395 7604 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ae16ddba-7385-4bde-8b7e-5be9f8106890-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:09.761891 master-0 kubenswrapper[7604]: I0309 16:34:09.761822 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae16ddba-7385-4bde-8b7e-5be9f8106890-ready" (OuterVolumeSpecName: "ready") pod "ae16ddba-7385-4bde-8b7e-5be9f8106890" (UID: "ae16ddba-7385-4bde-8b7e-5be9f8106890"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:34:09.763271 master-0 kubenswrapper[7604]: I0309 16:34:09.763158 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae16ddba-7385-4bde-8b7e-5be9f8106890-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "ae16ddba-7385-4bde-8b7e-5be9f8106890" (UID: "ae16ddba-7385-4bde-8b7e-5be9f8106890"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:34:09.766355 master-0 kubenswrapper[7604]: I0309 16:34:09.766284 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae16ddba-7385-4bde-8b7e-5be9f8106890-kube-api-access-57vgl" (OuterVolumeSpecName: "kube-api-access-57vgl") pod "ae16ddba-7385-4bde-8b7e-5be9f8106890" (UID: "ae16ddba-7385-4bde-8b7e-5be9f8106890"). InnerVolumeSpecName "kube-api-access-57vgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:34:09.864528 master-0 kubenswrapper[7604]: I0309 16:34:09.864460 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57vgl\" (UniqueName: \"kubernetes.io/projected/ae16ddba-7385-4bde-8b7e-5be9f8106890-kube-api-access-57vgl\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:09.864528 master-0 kubenswrapper[7604]: I0309 16:34:09.864534 7604 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ae16ddba-7385-4bde-8b7e-5be9f8106890-ready\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:09.864964 master-0 kubenswrapper[7604]: I0309 16:34:09.864553 7604 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ae16ddba-7385-4bde-8b7e-5be9f8106890-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:10.084887 master-0 kubenswrapper[7604]: I0309 16:34:10.084795 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-rrgmr"] Mar 09 16:34:10.090240 master-0 kubenswrapper[7604]: I0309 16:34:10.090145 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rrgmr"] Mar 09 16:34:10.728111 master-0 kubenswrapper[7604]: I0309 16:34:10.728004 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:10.728111 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:10.728111 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:10.728111 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:10.728111 master-0 kubenswrapper[7604]: I0309 16:34:10.728101 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:10.740916 master-0 kubenswrapper[7604]: I0309 16:34:10.740819 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"797303d2-6d31-42f6-a1a4-c894509fba00","Type":"ContainerStarted","Data":"0bd8a00ef7113d3a7bd5dd2884b67a8d73e4a8ff56a6f8e02309ba516f2a9770"} Mar 09 16:34:11.120835 master-0 kubenswrapper[7604]: I0309 16:34:11.120676 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" path="/var/lib/kubelet/pods/ae16ddba-7385-4bde-8b7e-5be9f8106890/volumes" Mar 09 16:34:11.729182 master-0 kubenswrapper[7604]: I0309 16:34:11.729066 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:11.729182 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:11.729182 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:11.729182 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:11.729182 master-0 kubenswrapper[7604]: I0309 16:34:11.729185 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:12.729041 master-0 kubenswrapper[7604]: I0309 16:34:12.728923 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:12.729041 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:12.729041 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:12.729041 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:12.729733 master-0 kubenswrapper[7604]: I0309 16:34:12.729069 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:13.728130 master-0 kubenswrapper[7604]: I0309 16:34:13.728068 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:13.728130 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:13.728130 master-0 kubenswrapper[7604]: [+]process-running ok 
Mar 09 16:34:13.728130 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:13.728447 master-0 kubenswrapper[7604]: I0309 16:34:13.728133 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:14.728852 master-0 kubenswrapper[7604]: I0309 16:34:14.728708 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:34:14.728852 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:34:14.728852 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:34:14.728852 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:34:14.728852 master-0 kubenswrapper[7604]: I0309 16:34:14.728851 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:34:14.729833 master-0 kubenswrapper[7604]: I0309 16:34:14.728948 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:34:14.730046 master-0 kubenswrapper[7604]: I0309 16:34:14.729986 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"116e02ef02114f2030248577cde62b42e1c5eea50c09ca56d92d93834a526424"} pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" containerMessage="Container router failed startup probe, will be restarted" Mar 09 16:34:14.730123 master-0 kubenswrapper[7604]: I0309 16:34:14.730049 7604 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" containerID="cri-o://116e02ef02114f2030248577cde62b42e1c5eea50c09ca56d92d93834a526424" gracePeriod=3600 Mar 09 16:34:16.674677 master-0 kubenswrapper[7604]: I0309 16:34:16.673650 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=8.673620003 podStartE2EDuration="8.673620003s" podCreationTimestamp="2026-03-09 16:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:34:10.805120567 +0000 UTC m=+507.859090000" watchObservedRunningTime="2026-03-09 16:34:16.673620003 +0000 UTC m=+513.727589426" Mar 09 16:34:16.674677 master-0 kubenswrapper[7604]: I0309 16:34:16.674634 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 09 16:34:16.675659 master-0 kubenswrapper[7604]: E0309 16:34:16.675012 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" Mar 09 16:34:16.675659 master-0 kubenswrapper[7604]: I0309 16:34:16.675030 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" Mar 09 16:34:16.675659 master-0 kubenswrapper[7604]: I0309 16:34:16.675208 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae16ddba-7385-4bde-8b7e-5be9f8106890" containerName="kube-multus-additional-cni-plugins" Mar 09 16:34:16.675938 master-0 kubenswrapper[7604]: I0309 16:34:16.675869 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.678622 master-0 kubenswrapper[7604]: I0309 16:34:16.678543 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cshl6" Mar 09 16:34:16.679691 master-0 kubenswrapper[7604]: I0309 16:34:16.679651 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 09 16:34:16.692090 master-0 kubenswrapper[7604]: I0309 16:34:16.692003 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 09 16:34:16.772454 master-0 kubenswrapper[7604]: I0309 16:34:16.772173 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.772454 master-0 kubenswrapper[7604]: I0309 16:34:16.772260 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-var-lock\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.772967 master-0 kubenswrapper[7604]: I0309 16:34:16.772785 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f44499-c673-4c73-8ee9-dcef8914ce14-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.796775 master-0 kubenswrapper[7604]: I0309 16:34:16.796709 7604 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-g8n5t_4bd3c489-427c-4a47-b7b9-5d1611b9be12/multus-admission-controller/0.log" Mar 09 16:34:16.796775 master-0 kubenswrapper[7604]: I0309 16:34:16.796768 7604 generic.go:334] "Generic (PLEG): container finished" podID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerID="936e54f2dcd8b97ec29ef8044719dc7e3e661dccc2b4396664320d24598d2652" exitCode=137 Mar 09 16:34:16.797173 master-0 kubenswrapper[7604]: I0309 16:34:16.796834 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" event={"ID":"4bd3c489-427c-4a47-b7b9-5d1611b9be12","Type":"ContainerDied","Data":"936e54f2dcd8b97ec29ef8044719dc7e3e661dccc2b4396664320d24598d2652"} Mar 09 16:34:16.875612 master-0 kubenswrapper[7604]: I0309 16:34:16.875329 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.876214 master-0 kubenswrapper[7604]: I0309 16:34:16.875575 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.876214 master-0 kubenswrapper[7604]: I0309 16:34:16.875740 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-var-lock\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.876214 master-0 
kubenswrapper[7604]: I0309 16:34:16.875842 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-var-lock\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.876214 master-0 kubenswrapper[7604]: I0309 16:34:16.876159 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f44499-c673-4c73-8ee9-dcef8914ce14-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:16.896111 master-0 kubenswrapper[7604]: I0309 16:34:16.895970 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f44499-c673-4c73-8ee9-dcef8914ce14-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:17.049886 master-0 kubenswrapper[7604]: I0309 16:34:17.049784 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 09 16:34:17.469914 master-0 kubenswrapper[7604]: I0309 16:34:17.469849 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-g8n5t_4bd3c489-427c-4a47-b7b9-5d1611b9be12/multus-admission-controller/0.log" Mar 09 16:34:17.470260 master-0 kubenswrapper[7604]: I0309 16:34:17.470009 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:34:17.589329 master-0 kubenswrapper[7604]: I0309 16:34:17.589232 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") pod \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " Mar 09 16:34:17.589754 master-0 kubenswrapper[7604]: I0309 16:34:17.589719 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") pod \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\" (UID: \"4bd3c489-427c-4a47-b7b9-5d1611b9be12\") " Mar 09 16:34:17.593540 master-0 kubenswrapper[7604]: I0309 16:34:17.593454 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "4bd3c489-427c-4a47-b7b9-5d1611b9be12" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:34:17.594622 master-0 kubenswrapper[7604]: I0309 16:34:17.594088 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl" (OuterVolumeSpecName: "kube-api-access-gc9jl") pod "4bd3c489-427c-4a47-b7b9-5d1611b9be12" (UID: "4bd3c489-427c-4a47-b7b9-5d1611b9be12"). InnerVolumeSpecName "kube-api-access-gc9jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:34:17.611572 master-0 kubenswrapper[7604]: I0309 16:34:17.611488 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 09 16:34:17.616971 master-0 kubenswrapper[7604]: W0309 16:34:17.616904 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf4f44499_c673_4c73_8ee9_dcef8914ce14.slice/crio-d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a WatchSource:0}: Error finding container d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a: Status 404 returned error can't find the container with id d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a Mar 09 16:34:17.694505 master-0 kubenswrapper[7604]: I0309 16:34:17.692381 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc9jl\" (UniqueName: \"kubernetes.io/projected/4bd3c489-427c-4a47-b7b9-5d1611b9be12-kube-api-access-gc9jl\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:17.694505 master-0 kubenswrapper[7604]: I0309 16:34:17.692568 7604 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4bd3c489-427c-4a47-b7b9-5d1611b9be12-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:17.808062 master-0 kubenswrapper[7604]: I0309 16:34:17.807983 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"f4f44499-c673-4c73-8ee9-dcef8914ce14","Type":"ContainerStarted","Data":"d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a"} Mar 09 16:34:17.813681 master-0 kubenswrapper[7604]: I0309 16:34:17.813635 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-g8n5t_4bd3c489-427c-4a47-b7b9-5d1611b9be12/multus-admission-controller/0.log" Mar 09 16:34:17.813842 master-0 kubenswrapper[7604]: I0309 
16:34:17.813733 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" event={"ID":"4bd3c489-427c-4a47-b7b9-5d1611b9be12","Type":"ContainerDied","Data":"565d53795593613b69876bde417beb025da29d5e3368eb375d8d27d674214719"} Mar 09 16:34:17.813842 master-0 kubenswrapper[7604]: I0309 16:34:17.813797 7604 scope.go:117] "RemoveContainer" containerID="2dbce61b6bd988e12f343e5566fe1a52a3a65fb58e742d2db6fef1e31072c6b0" Mar 09 16:34:17.813913 master-0 kubenswrapper[7604]: I0309 16:34:17.813860 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-g8n5t" Mar 09 16:34:17.833084 master-0 kubenswrapper[7604]: I0309 16:34:17.833042 7604 scope.go:117] "RemoveContainer" containerID="936e54f2dcd8b97ec29ef8044719dc7e3e661dccc2b4396664320d24598d2652" Mar 09 16:34:17.861360 master-0 kubenswrapper[7604]: I0309 16:34:17.861285 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-g8n5t"] Mar 09 16:34:17.875177 master-0 kubenswrapper[7604]: I0309 16:34:17.875075 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-g8n5t"] Mar 09 16:34:18.826081 master-0 kubenswrapper[7604]: I0309 16:34:18.825953 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"f4f44499-c673-4c73-8ee9-dcef8914ce14","Type":"ContainerStarted","Data":"e31e101fae28ad5c7e22332114d10cb8955a646e181d2af78e8c1a0573c9de55"} Mar 09 16:34:18.869053 master-0 kubenswrapper[7604]: I0309 16:34:18.868918 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.868884333 podStartE2EDuration="2.868884333s" podCreationTimestamp="2026-03-09 16:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:34:18.864769004 +0000 UTC m=+515.918738447" watchObservedRunningTime="2026-03-09 16:34:18.868884333 +0000 UTC m=+515.922853766" Mar 09 16:34:19.121345 master-0 kubenswrapper[7604]: I0309 16:34:19.121161 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" path="/var/lib/kubelet/pods/4bd3c489-427c-4a47-b7b9-5d1611b9be12/volumes" Mar 09 16:34:28.319373 master-0 kubenswrapper[7604]: I0309 16:34:28.319285 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 09 16:34:28.320556 master-0 kubenswrapper[7604]: E0309 16:34:28.319765 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="kube-rbac-proxy" Mar 09 16:34:28.320556 master-0 kubenswrapper[7604]: I0309 16:34:28.319788 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="kube-rbac-proxy" Mar 09 16:34:28.320556 master-0 kubenswrapper[7604]: E0309 16:34:28.319810 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="multus-admission-controller" Mar 09 16:34:28.320556 master-0 kubenswrapper[7604]: I0309 16:34:28.319821 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="multus-admission-controller" Mar 09 16:34:28.320556 master-0 kubenswrapper[7604]: I0309 16:34:28.320011 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="kube-rbac-proxy" Mar 09 16:34:28.320556 master-0 kubenswrapper[7604]: I0309 16:34:28.320042 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd3c489-427c-4a47-b7b9-5d1611b9be12" containerName="multus-admission-controller" Mar 09 16:34:28.320951 
master-0 kubenswrapper[7604]: I0309 16:34:28.320777 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.325973 master-0 kubenswrapper[7604]: I0309 16:34:28.325902 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 09 16:34:28.326257 master-0 kubenswrapper[7604]: I0309 16:34:28.326056 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-cd6zf" Mar 09 16:34:28.343612 master-0 kubenswrapper[7604]: I0309 16:34:28.343555 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 09 16:34:28.384007 master-0 kubenswrapper[7604]: I0309 16:34:28.383904 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.384370 master-0 kubenswrapper[7604]: I0309 16:34:28.384042 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d4d5a2-1544-4443-acc5-d7eee167a29c-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.384370 master-0 kubenswrapper[7604]: I0309 16:34:28.384097 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.485275 master-0 kubenswrapper[7604]: I0309 16:34:28.485190 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.485275 master-0 kubenswrapper[7604]: I0309 16:34:28.485287 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d4d5a2-1544-4443-acc5-d7eee167a29c-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.486090 master-0 kubenswrapper[7604]: I0309 16:34:28.485467 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.486090 master-0 kubenswrapper[7604]: I0309 16:34:28.485526 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.486090 master-0 kubenswrapper[7604]: I0309 16:34:28.485482 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: 
\"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.504645 master-0 kubenswrapper[7604]: I0309 16:34:28.504602 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d4d5a2-1544-4443-acc5-d7eee167a29c-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:28.649397 master-0 kubenswrapper[7604]: I0309 16:34:28.649187 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:34:29.069231 master-0 kubenswrapper[7604]: I0309 16:34:29.068748 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 09 16:34:29.069739 master-0 kubenswrapper[7604]: W0309 16:34:29.069672 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84d4d5a2_1544_4443_acc5_d7eee167a29c.slice/crio-1efa3cc9328f0d084f5caf6acc39c884bf0742e3907b9693683e98c9f90f46cb WatchSource:0}: Error finding container 1efa3cc9328f0d084f5caf6acc39c884bf0742e3907b9693683e98c9f90f46cb: Status 404 returned error can't find the container with id 1efa3cc9328f0d084f5caf6acc39c884bf0742e3907b9693683e98c9f90f46cb Mar 09 16:34:29.912477 master-0 kubenswrapper[7604]: I0309 16:34:29.912335 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"84d4d5a2-1544-4443-acc5-d7eee167a29c","Type":"ContainerStarted","Data":"beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be"} Mar 09 16:34:29.912477 master-0 kubenswrapper[7604]: I0309 16:34:29.912401 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" 
event={"ID":"84d4d5a2-1544-4443-acc5-d7eee167a29c","Type":"ContainerStarted","Data":"1efa3cc9328f0d084f5caf6acc39c884bf0742e3907b9693683e98c9f90f46cb"} Mar 09 16:34:29.939673 master-0 kubenswrapper[7604]: I0309 16:34:29.937389 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.937362883 podStartE2EDuration="1.937362883s" podCreationTimestamp="2026-03-09 16:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:34:29.937060425 +0000 UTC m=+526.991029868" watchObservedRunningTime="2026-03-09 16:34:29.937362883 +0000 UTC m=+526.991332306" Mar 09 16:34:34.726627 master-0 kubenswrapper[7604]: I0309 16:34:34.726504 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 09 16:34:34.727598 master-0 kubenswrapper[7604]: I0309 16:34:34.726922 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="84d4d5a2-1544-4443-acc5-d7eee167a29c" containerName="installer" containerID="cri-o://beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be" gracePeriod=30 Mar 09 16:34:39.315294 master-0 kubenswrapper[7604]: I0309 16:34:39.315197 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 09 16:34:39.316624 master-0 kubenswrapper[7604]: I0309 16:34:39.316555 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.331649 master-0 kubenswrapper[7604]: I0309 16:34:39.331547 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 09 16:34:39.466368 master-0 kubenswrapper[7604]: I0309 16:34:39.466262 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.466673 master-0 kubenswrapper[7604]: I0309 16:34:39.466367 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-var-lock\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.466673 master-0 kubenswrapper[7604]: I0309 16:34:39.466582 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.568516 master-0 kubenswrapper[7604]: I0309 16:34:39.568366 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.568516 master-0 kubenswrapper[7604]: I0309 16:34:39.568458 7604 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-var-lock\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.568516 master-0 kubenswrapper[7604]: I0309 16:34:39.568488 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.568870 master-0 kubenswrapper[7604]: I0309 16:34:39.568576 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.568870 master-0 kubenswrapper[7604]: I0309 16:34:39.568696 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-var-lock\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.587853 master-0 kubenswrapper[7604]: I0309 16:34:39.587768 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:39.653923 master-0 kubenswrapper[7604]: I0309 16:34:39.653831 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:34:40.136316 master-0 kubenswrapper[7604]: I0309 16:34:40.136163 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 09 16:34:40.144570 master-0 kubenswrapper[7604]: W0309 16:34:40.144443 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3a8a48b1_d4a9_48fb_912e_2f793a6d8478.slice/crio-ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15 WatchSource:0}: Error finding container ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15: Status 404 returned error can't find the container with id ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15 Mar 09 16:34:40.842498 master-0 kubenswrapper[7604]: I0309 16:34:40.842281 7604 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 09 16:34:40.843193 master-0 kubenswrapper[7604]: I0309 16:34:40.842878 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://deb49cc582b4f05da3e439b71cfab3c7b565bd681dbf4fabe99e76944648f931" gracePeriod=30 Mar 09 16:34:40.843193 master-0 kubenswrapper[7604]: I0309 16:34:40.842912 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://1ef790e4963197709ca73ccb0ef459f616a446f12d2312254e29118d5fbf4647" gracePeriod=30 Mar 09 16:34:40.843193 master-0 kubenswrapper[7604]: I0309 16:34:40.843003 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://9246a82d36d6d839dd216afb960c961d28bf9631aa040ddcbe7751de007686ca" 
gracePeriod=30 Mar 09 16:34:40.843193 master-0 kubenswrapper[7604]: I0309 16:34:40.843016 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://019a53aacd83e37d8e9ec3c064556104c3d28abe8d9353b3fe0029fa09706cde" gracePeriod=30 Mar 09 16:34:40.843193 master-0 kubenswrapper[7604]: I0309 16:34:40.842905 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://cdee4fd47317482d2314470b8d7e76453519a7ffb89e09ee80444b9e7dc9b818" gracePeriod=30 Mar 09 16:34:40.845588 master-0 kubenswrapper[7604]: I0309 16:34:40.845534 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 09 16:34:40.846083 master-0 kubenswrapper[7604]: E0309 16:34:40.846045 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 09 16:34:40.846083 master-0 kubenswrapper[7604]: I0309 16:34:40.846071 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 09 16:34:40.846191 master-0 kubenswrapper[7604]: E0309 16:34:40.846121 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 09 16:34:40.846191 master-0 kubenswrapper[7604]: I0309 16:34:40.846131 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 09 16:34:40.846191 master-0 kubenswrapper[7604]: E0309 16:34:40.846143 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 09 16:34:40.846191 master-0 kubenswrapper[7604]: I0309 16:34:40.846154 7604 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 09 16:34:40.846191 master-0 kubenswrapper[7604]: E0309 16:34:40.846174 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 09 16:34:40.846191 master-0 kubenswrapper[7604]: I0309 16:34:40.846184 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: E0309 16:34:40.846194 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846203 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: E0309 16:34:40.846214 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846225 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: E0309 16:34:40.846240 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846249 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: E0309 16:34:40.846264 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 
16:34:40.846272 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846518 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846547 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846570 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846582 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 09 16:34:40.846971 master-0 kubenswrapper[7604]: I0309 16:34:40.846596 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 09 16:34:40.994067 master-0 kubenswrapper[7604]: I0309 16:34:40.993984 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:40.994067 master-0 kubenswrapper[7604]: I0309 16:34:40.994076 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:40.994502 master-0 kubenswrapper[7604]: I0309 16:34:40.994145 7604 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:40.994502 master-0 kubenswrapper[7604]: I0309 16:34:40.994181 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:40.994502 master-0 kubenswrapper[7604]: I0309 16:34:40.994230 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:40.994502 master-0 kubenswrapper[7604]: I0309 16:34:40.994247 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.004905 master-0 kubenswrapper[7604]: I0309 16:34:41.004795 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3a8a48b1-d4a9-48fb-912e-2f793a6d8478","Type":"ContainerStarted","Data":"1f3ede07b96bf06c243e7982afd5fe4a072e8a3d04eb6bffe1b7a50cca581cf9"} Mar 09 16:34:41.004905 master-0 kubenswrapper[7604]: I0309 16:34:41.004883 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" 
event={"ID":"3a8a48b1-d4a9-48fb-912e-2f793a6d8478","Type":"ContainerStarted","Data":"ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15"} Mar 09 16:34:41.008076 master-0 kubenswrapper[7604]: I0309 16:34:41.008029 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 09 16:34:41.010006 master-0 kubenswrapper[7604]: I0309 16:34:41.009953 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 09 16:34:41.012945 master-0 kubenswrapper[7604]: I0309 16:34:41.012861 7604 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="cdee4fd47317482d2314470b8d7e76453519a7ffb89e09ee80444b9e7dc9b818" exitCode=2 Mar 09 16:34:41.012945 master-0 kubenswrapper[7604]: I0309 16:34:41.012942 7604 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="019a53aacd83e37d8e9ec3c064556104c3d28abe8d9353b3fe0029fa09706cde" exitCode=0 Mar 09 16:34:41.013053 master-0 kubenswrapper[7604]: I0309 16:34:41.012956 7604 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="1ef790e4963197709ca73ccb0ef459f616a446f12d2312254e29118d5fbf4647" exitCode=2 Mar 09 16:34:41.096172 master-0 kubenswrapper[7604]: I0309 16:34:41.095964 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096172 master-0 kubenswrapper[7604]: I0309 16:34:41.096133 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") 
pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096171 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096226 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096337 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096371 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096415 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096467 7604 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096570 master-0 kubenswrapper[7604]: I0309 16:34:41.096532 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096868 master-0 kubenswrapper[7604]: I0309 16:34:41.096609 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096868 master-0 kubenswrapper[7604]: I0309 16:34:41.096621 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:41.096868 master-0 kubenswrapper[7604]: I0309 16:34:41.096650 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:34:51.338003 master-0 kubenswrapper[7604]: E0309 16:34:51.337860 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Mar 09 16:34:55.125560 master-0 kubenswrapper[7604]: I0309 16:34:55.125455 7604 generic.go:334] "Generic (PLEG): container finished" podID="797303d2-6d31-42f6-a1a4-c894509fba00" containerID="0bd8a00ef7113d3a7bd5dd2884b67a8d73e4a8ff56a6f8e02309ba516f2a9770" exitCode=0 Mar 09 16:34:55.125560 master-0 kubenswrapper[7604]: I0309 16:34:55.125532 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"797303d2-6d31-42f6-a1a4-c894509fba00","Type":"ContainerDied","Data":"0bd8a00ef7113d3a7bd5dd2884b67a8d73e4a8ff56a6f8e02309ba516f2a9770"} Mar 09 16:34:56.136537 master-0 kubenswrapper[7604]: I0309 16:34:56.136470 7604 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239" exitCode=1 Mar 09 16:34:56.137398 master-0 kubenswrapper[7604]: I0309 16:34:56.136562 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239"} Mar 09 16:34:56.137398 master-0 kubenswrapper[7604]: I0309 16:34:56.136606 7604 scope.go:117] "RemoveContainer" containerID="e8fcbf086ed08a14966a423a93930e67c1cbd9793017fcea8581f23478898eea" Mar 09 16:34:56.137704 master-0 kubenswrapper[7604]: I0309 16:34:56.137640 7604 scope.go:117] "RemoveContainer" containerID="e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239" Mar 09 16:34:56.138055 master-0 kubenswrapper[7604]: E0309 16:34:56.137998 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" 
podUID="a1a56802af72ce1aac6b5077f1695ac0" Mar 09 16:34:56.140680 master-0 kubenswrapper[7604]: I0309 16:34:56.140625 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6" exitCode=1 Mar 09 16:34:56.140998 master-0 kubenswrapper[7604]: I0309 16:34:56.140685 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6"} Mar 09 16:34:56.141668 master-0 kubenswrapper[7604]: I0309 16:34:56.141285 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6" Mar 09 16:34:56.142107 master-0 kubenswrapper[7604]: E0309 16:34:56.141701 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:34:56.158643 master-0 kubenswrapper[7604]: I0309 16:34:56.158556 7604 scope.go:117] "RemoveContainer" containerID="8ec03662bfb689a4764f7edbb538732c79e6e42855becb27b7223236cfbfeaa7" Mar 09 16:34:56.433413 master-0 kubenswrapper[7604]: I0309 16:34:56.433349 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:56.461827 master-0 kubenswrapper[7604]: I0309 16:34:56.461719 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797303d2-6d31-42f6-a1a4-c894509fba00-kube-api-access\") pod \"797303d2-6d31-42f6-a1a4-c894509fba00\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " Mar 09 16:34:56.461827 master-0 kubenswrapper[7604]: I0309 16:34:56.461832 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-kubelet-dir\") pod \"797303d2-6d31-42f6-a1a4-c894509fba00\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " Mar 09 16:34:56.462215 master-0 kubenswrapper[7604]: I0309 16:34:56.461864 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-var-lock\") pod \"797303d2-6d31-42f6-a1a4-c894509fba00\" (UID: \"797303d2-6d31-42f6-a1a4-c894509fba00\") " Mar 09 16:34:56.462215 master-0 kubenswrapper[7604]: I0309 16:34:56.462049 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "797303d2-6d31-42f6-a1a4-c894509fba00" (UID: "797303d2-6d31-42f6-a1a4-c894509fba00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:34:56.462215 master-0 kubenswrapper[7604]: I0309 16:34:56.462159 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-var-lock" (OuterVolumeSpecName: "var-lock") pod "797303d2-6d31-42f6-a1a4-c894509fba00" (UID: "797303d2-6d31-42f6-a1a4-c894509fba00"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:34:56.465832 master-0 kubenswrapper[7604]: I0309 16:34:56.465762 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797303d2-6d31-42f6-a1a4-c894509fba00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "797303d2-6d31-42f6-a1a4-c894509fba00" (UID: "797303d2-6d31-42f6-a1a4-c894509fba00"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:34:56.563978 master-0 kubenswrapper[7604]: I0309 16:34:56.563882 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/797303d2-6d31-42f6-a1a4-c894509fba00-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:56.563978 master-0 kubenswrapper[7604]: I0309 16:34:56.563927 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:56.563978 master-0 kubenswrapper[7604]: I0309 16:34:56.563944 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/797303d2-6d31-42f6-a1a4-c894509fba00-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:34:57.155510 master-0 kubenswrapper[7604]: I0309 16:34:57.155453 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"797303d2-6d31-42f6-a1a4-c894509fba00","Type":"ContainerDied","Data":"3299202c28b8acf777efcf9fdf25fde3d2b0c3f7effed599dce85a012e3a3b40"} Mar 09 16:34:57.156076 master-0 kubenswrapper[7604]: I0309 16:34:57.156059 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3299202c28b8acf777efcf9fdf25fde3d2b0c3f7effed599dce85a012e3a3b40" Mar 09 16:34:57.156150 master-0 kubenswrapper[7604]: I0309 16:34:57.155580 7604 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 09 16:34:57.180660 master-0 kubenswrapper[7604]: I0309 16:34:57.180592 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:34:57.182006 master-0 kubenswrapper[7604]: I0309 16:34:57.181989 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6" Mar 09 16:34:57.182554 master-0 kubenswrapper[7604]: E0309 16:34:57.182529 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:35:00.630892 master-0 kubenswrapper[7604]: I0309 16:35:00.630811 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_84d4d5a2-1544-4443-acc5-d7eee167a29c/installer/0.log" Mar 09 16:35:00.631597 master-0 kubenswrapper[7604]: I0309 16:35:00.630949 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 09 16:35:00.730117 master-0 kubenswrapper[7604]: I0309 16:35:00.730021 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d4d5a2-1544-4443-acc5-d7eee167a29c-kube-api-access\") pod \"84d4d5a2-1544-4443-acc5-d7eee167a29c\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " Mar 09 16:35:00.730117 master-0 kubenswrapper[7604]: I0309 16:35:00.730098 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-kubelet-dir\") pod \"84d4d5a2-1544-4443-acc5-d7eee167a29c\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " Mar 09 16:35:00.730573 master-0 kubenswrapper[7604]: I0309 16:35:00.730202 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-var-lock\") pod \"84d4d5a2-1544-4443-acc5-d7eee167a29c\" (UID: \"84d4d5a2-1544-4443-acc5-d7eee167a29c\") " Mar 09 16:35:00.730573 master-0 kubenswrapper[7604]: I0309 16:35:00.730327 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-var-lock" (OuterVolumeSpecName: "var-lock") pod "84d4d5a2-1544-4443-acc5-d7eee167a29c" (UID: "84d4d5a2-1544-4443-acc5-d7eee167a29c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:35:00.730573 master-0 kubenswrapper[7604]: I0309 16:35:00.730531 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84d4d5a2-1544-4443-acc5-d7eee167a29c" (UID: "84d4d5a2-1544-4443-acc5-d7eee167a29c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:35:00.733989 master-0 kubenswrapper[7604]: I0309 16:35:00.733918 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d4d5a2-1544-4443-acc5-d7eee167a29c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84d4d5a2-1544-4443-acc5-d7eee167a29c" (UID: "84d4d5a2-1544-4443-acc5-d7eee167a29c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:35:00.832762 master-0 kubenswrapper[7604]: I0309 16:35:00.832637 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:35:00.832762 master-0 kubenswrapper[7604]: I0309 16:35:00.832709 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84d4d5a2-1544-4443-acc5-d7eee167a29c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:35:00.832762 master-0 kubenswrapper[7604]: I0309 16:35:00.832725 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84d4d5a2-1544-4443-acc5-d7eee167a29c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:35:01.189537 master-0 kubenswrapper[7604]: I0309 16:35:01.189456 7604 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="116e02ef02114f2030248577cde62b42e1c5eea50c09ca56d92d93834a526424" exitCode=0 Mar 09 16:35:01.189894 master-0 kubenswrapper[7604]: I0309 16:35:01.189564 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerDied","Data":"116e02ef02114f2030248577cde62b42e1c5eea50c09ca56d92d93834a526424"} Mar 09 16:35:01.189894 master-0 
kubenswrapper[7604]: I0309 16:35:01.189667 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"a77339a51d7a4ed44c4a920634c7235e0fcfd324430f106c1b1bcdd4dc11bacc"}
Mar 09 16:35:01.189894 master-0 kubenswrapper[7604]: I0309 16:35:01.189699 7604 scope.go:117] "RemoveContainer" containerID="93efe9411f2e38cd517ba36a435f06b5ae09ea631b8beedeb3e3a210ec78c7fe"
Mar 09 16:35:01.193020 master-0 kubenswrapper[7604]: I0309 16:35:01.192974 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_84d4d5a2-1544-4443-acc5-d7eee167a29c/installer/0.log"
Mar 09 16:35:01.193112 master-0 kubenswrapper[7604]: I0309 16:35:01.193048 7604 generic.go:334] "Generic (PLEG): container finished" podID="84d4d5a2-1544-4443-acc5-d7eee167a29c" containerID="beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be" exitCode=1
Mar 09 16:35:01.193176 master-0 kubenswrapper[7604]: I0309 16:35:01.193133 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 09 16:35:01.193267 master-0 kubenswrapper[7604]: I0309 16:35:01.193134 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"84d4d5a2-1544-4443-acc5-d7eee167a29c","Type":"ContainerDied","Data":"beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be"}
Mar 09 16:35:01.193345 master-0 kubenswrapper[7604]: I0309 16:35:01.193282 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"84d4d5a2-1544-4443-acc5-d7eee167a29c","Type":"ContainerDied","Data":"1efa3cc9328f0d084f5caf6acc39c884bf0742e3907b9693683e98c9f90f46cb"}
Mar 09 16:35:01.213742 master-0 kubenswrapper[7604]: I0309 16:35:01.213699 7604 scope.go:117] "RemoveContainer" containerID="beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be"
Mar 09 16:35:01.229370 master-0 kubenswrapper[7604]: I0309 16:35:01.229286 7604 scope.go:117] "RemoveContainer" containerID="beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be"
Mar 09 16:35:01.229994 master-0 kubenswrapper[7604]: E0309 16:35:01.229925 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be\": container with ID starting with beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be not found: ID does not exist" containerID="beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be"
Mar 09 16:35:01.230066 master-0 kubenswrapper[7604]: I0309 16:35:01.230011 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be"} err="failed to get container status \"beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be\": rpc error: code = NotFound desc = could not find container \"beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be\": container with ID starting with beede75dd18d1e20442a198296740d7eb061b749b32fc5ad91142194290539be not found: ID does not exist"
Mar 09 16:35:01.339086 master-0 kubenswrapper[7604]: E0309 16:35:01.338960 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:01.726714 master-0 kubenswrapper[7604]: I0309 16:35:01.726368 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:35:01.730828 master-0 kubenswrapper[7604]: I0309 16:35:01.730760 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:01.730828 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:01.730828 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:01.730828 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:01.730999 master-0 kubenswrapper[7604]: I0309 16:35:01.730861 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:02.728639 master-0 kubenswrapper[7604]: I0309 16:35:02.728557 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:02.728639 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:02.728639 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:02.728639 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:02.729587 master-0 kubenswrapper[7604]: I0309 16:35:02.728664 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:02.913839 master-0 kubenswrapper[7604]: I0309 16:35:02.913709 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:35:02.915050 master-0 kubenswrapper[7604]: I0309 16:35:02.915015 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6"
Mar 09 16:35:02.915384 master-0 kubenswrapper[7604]: E0309 16:35:02.915337 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:35:03.216305 master-0 kubenswrapper[7604]: I0309 16:35:03.216213 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_f4f44499-c673-4c73-8ee9-dcef8914ce14/installer/0.log"
Mar 09 16:35:03.216305 master-0 kubenswrapper[7604]: I0309 16:35:03.216303 7604 generic.go:334] "Generic (PLEG): container finished" podID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerID="e31e101fae28ad5c7e22332114d10cb8955a646e181d2af78e8c1a0573c9de55" exitCode=1
Mar 09 16:35:03.216712 master-0 kubenswrapper[7604]: I0309 16:35:03.216376 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"f4f44499-c673-4c73-8ee9-dcef8914ce14","Type":"ContainerDied","Data":"e31e101fae28ad5c7e22332114d10cb8955a646e181d2af78e8c1a0573c9de55"}
Mar 09 16:35:03.729744 master-0 kubenswrapper[7604]: I0309 16:35:03.729665 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:03.729744 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:03.729744 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:03.729744 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:03.730650 master-0 kubenswrapper[7604]: I0309 16:35:03.730571 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:04.528043 master-0 kubenswrapper[7604]: I0309 16:35:04.527932 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_f4f44499-c673-4c73-8ee9-dcef8914ce14/installer/0.log"
Mar 09 16:35:04.528043 master-0 kubenswrapper[7604]: I0309 16:35:04.528041 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 09 16:35:04.700617 master-0 kubenswrapper[7604]: I0309 16:35:04.700531 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-var-lock\") pod \"f4f44499-c673-4c73-8ee9-dcef8914ce14\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") "
Mar 09 16:35:04.701043 master-0 kubenswrapper[7604]: I0309 16:35:04.700655 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f44499-c673-4c73-8ee9-dcef8914ce14-kube-api-access\") pod \"f4f44499-c673-4c73-8ee9-dcef8914ce14\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") "
Mar 09 16:35:04.701043 master-0 kubenswrapper[7604]: I0309 16:35:04.700734 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-kubelet-dir\") pod \"f4f44499-c673-4c73-8ee9-dcef8914ce14\" (UID: \"f4f44499-c673-4c73-8ee9-dcef8914ce14\") "
Mar 09 16:35:04.701043 master-0 kubenswrapper[7604]: I0309 16:35:04.700857 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-var-lock" (OuterVolumeSpecName: "var-lock") pod "f4f44499-c673-4c73-8ee9-dcef8914ce14" (UID: "f4f44499-c673-4c73-8ee9-dcef8914ce14"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:04.701177 master-0 kubenswrapper[7604]: I0309 16:35:04.701022 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f4f44499-c673-4c73-8ee9-dcef8914ce14" (UID: "f4f44499-c673-4c73-8ee9-dcef8914ce14"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:04.701177 master-0 kubenswrapper[7604]: I0309 16:35:04.701081 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:04.704770 master-0 kubenswrapper[7604]: I0309 16:35:04.704719 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f44499-c673-4c73-8ee9-dcef8914ce14-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f4f44499-c673-4c73-8ee9-dcef8914ce14" (UID: "f4f44499-c673-4c73-8ee9-dcef8914ce14"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:35:04.729376 master-0 kubenswrapper[7604]: I0309 16:35:04.729226 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:04.729376 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:04.729376 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:04.729376 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:04.729376 master-0 kubenswrapper[7604]: I0309 16:35:04.729305 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:04.802866 master-0 kubenswrapper[7604]: I0309 16:35:04.802746 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f4f44499-c673-4c73-8ee9-dcef8914ce14-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:04.802866 master-0 kubenswrapper[7604]: I0309 16:35:04.802825 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4f44499-c673-4c73-8ee9-dcef8914ce14-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:05.057488 master-0 kubenswrapper[7604]: I0309 16:35:05.057236 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:35:05.058511 master-0 kubenswrapper[7604]: I0309 16:35:05.058350 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6"
Mar 09 16:35:05.058883 master-0 kubenswrapper[7604]: E0309 16:35:05.058727 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:35:05.233907 master-0 kubenswrapper[7604]: I0309 16:35:05.233816 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_f4f44499-c673-4c73-8ee9-dcef8914ce14/installer/0.log"
Mar 09 16:35:05.234285 master-0 kubenswrapper[7604]: I0309 16:35:05.233940 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"f4f44499-c673-4c73-8ee9-dcef8914ce14","Type":"ContainerDied","Data":"d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a"}
Mar 09 16:35:05.234285 master-0 kubenswrapper[7604]: I0309 16:35:05.233979 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a"
Mar 09 16:35:05.234285 master-0 kubenswrapper[7604]: I0309 16:35:05.234007 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 09 16:35:05.728658 master-0 kubenswrapper[7604]: I0309 16:35:05.728537 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:05.728658 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:05.728658 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:05.728658 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:05.729070 master-0 kubenswrapper[7604]: I0309 16:35:05.728665 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:06.730131 master-0 kubenswrapper[7604]: I0309 16:35:06.730045 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:06.730131 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:06.730131 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:06.730131 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:06.731007 master-0 kubenswrapper[7604]: I0309 16:35:06.730148 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:07.728579 master-0 kubenswrapper[7604]: I0309 16:35:07.728493 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:07.728579 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:07.728579 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:07.728579 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:07.729214 master-0 kubenswrapper[7604]: I0309 16:35:07.728628 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:08.725634 master-0 kubenswrapper[7604]: I0309 16:35:08.725565 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:35:08.727227 master-0 kubenswrapper[7604]: I0309 16:35:08.727185 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:08.727227 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:08.727227 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:08.727227 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:08.727467 master-0 kubenswrapper[7604]: I0309 16:35:08.727241 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:09.729791 master-0 kubenswrapper[7604]: I0309 16:35:09.729695 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:09.729791 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:09.729791 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:09.729791 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:09.730727 master-0 kubenswrapper[7604]: I0309 16:35:09.729809 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:10.111280 master-0 kubenswrapper[7604]: I0309 16:35:10.111061 7604 scope.go:117] "RemoveContainer" containerID="e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239"
Mar 09 16:35:10.727814 master-0 kubenswrapper[7604]: I0309 16:35:10.727753 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:10.727814 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:10.727814 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:10.727814 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:10.728256 master-0 kubenswrapper[7604]: I0309 16:35:10.727840 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:11.292491 master-0 kubenswrapper[7604]: I0309 16:35:11.292385 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 09 16:35:11.294040 master-0 kubenswrapper[7604]: I0309 16:35:11.293992 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 09 16:35:11.295044 master-0 kubenswrapper[7604]: I0309 16:35:11.295009 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 09 16:35:11.295717 master-0 kubenswrapper[7604]: I0309 16:35:11.295683 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 09 16:35:11.297066 master-0 kubenswrapper[7604]: I0309 16:35:11.297018 7604 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="9246a82d36d6d839dd216afb960c961d28bf9631aa040ddcbe7751de007686ca" exitCode=137
Mar 09 16:35:11.297066 master-0 kubenswrapper[7604]: I0309 16:35:11.297055 7604 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="deb49cc582b4f05da3e439b71cfab3c7b565bd681dbf4fabe99e76944648f931" exitCode=137
Mar 09 16:35:11.299915 master-0 kubenswrapper[7604]: I0309 16:35:11.299852 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"2d59ac76dc4be81acf3ade62baf431dad3208a3f0083ed9e5b09fbc150f0a9be"}
Mar 09 16:35:11.340550 master-0 kubenswrapper[7604]: E0309 16:35:11.340378 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:11.434318 master-0 kubenswrapper[7604]: I0309 16:35:11.434222 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 09 16:35:11.435142 master-0 kubenswrapper[7604]: I0309 16:35:11.435108 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 09 16:35:11.435724 master-0 kubenswrapper[7604]: I0309 16:35:11.435689 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 09 16:35:11.436093 master-0 kubenswrapper[7604]: I0309 16:35:11.436064 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 09 16:35:11.437285 master-0 kubenswrapper[7604]: I0309 16:35:11.437137 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 09 16:35:11.614184 master-0 kubenswrapper[7604]: I0309 16:35:11.613886 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 09 16:35:11.614184 master-0 kubenswrapper[7604]: I0309 16:35:11.614064 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 09 16:35:11.614184 master-0 kubenswrapper[7604]: I0309 16:35:11.614106 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 09 16:35:11.614626 master-0 kubenswrapper[7604]: I0309 16:35:11.614170 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:11.614626 master-0 kubenswrapper[7604]: I0309 16:35:11.614308 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:11.614626 master-0 kubenswrapper[7604]: I0309 16:35:11.614218 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:11.614626 master-0 kubenswrapper[7604]: I0309 16:35:11.614297 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:11.614626 master-0 kubenswrapper[7604]: I0309 16:35:11.614235 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 09 16:35:11.614626 master-0 kubenswrapper[7604]: I0309 16:35:11.614521 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 09 16:35:11.614838 master-0 kubenswrapper[7604]: I0309 16:35:11.614607 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:11.614838 master-0 kubenswrapper[7604]: I0309 16:35:11.614717 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 09 16:35:11.614838 master-0 kubenswrapper[7604]: I0309 16:35:11.614786 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:35:11.615651 master-0 kubenswrapper[7604]: I0309 16:35:11.615610 7604 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:11.615651 master-0 kubenswrapper[7604]: I0309 16:35:11.615642 7604 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:11.615651 master-0 kubenswrapper[7604]: I0309 16:35:11.615656 7604 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:11.615795 master-0 kubenswrapper[7604]: I0309 16:35:11.615666 7604 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:11.615795 master-0 kubenswrapper[7604]: I0309 16:35:11.615676 7604 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:11.615795 master-0 kubenswrapper[7604]: I0309 16:35:11.615684 7604 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:35:11.730022 master-0 kubenswrapper[7604]: I0309 16:35:11.729894 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:11.730022 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:11.730022 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:11.730022 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:11.730022 master-0 kubenswrapper[7604]: I0309 16:35:11.730035 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:12.310582 master-0 kubenswrapper[7604]: I0309 16:35:12.310490 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 09 16:35:12.312958 master-0 kubenswrapper[7604]: I0309 16:35:12.312918 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 09 16:35:12.313768 master-0 kubenswrapper[7604]: I0309 16:35:12.313738 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 09 16:35:12.314446 master-0 kubenswrapper[7604]: I0309 16:35:12.314394 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 09 16:35:12.316628 master-0 kubenswrapper[7604]: I0309 16:35:12.316566 7604 scope.go:117] "RemoveContainer" containerID="cdee4fd47317482d2314470b8d7e76453519a7ffb89e09ee80444b9e7dc9b818"
Mar 09 16:35:12.316758 master-0 kubenswrapper[7604]: I0309 16:35:12.316641 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 09 16:35:12.335876 master-0 kubenswrapper[7604]: I0309 16:35:12.335817 7604 scope.go:117] "RemoveContainer" containerID="019a53aacd83e37d8e9ec3c064556104c3d28abe8d9353b3fe0029fa09706cde"
Mar 09 16:35:12.352069 master-0 kubenswrapper[7604]: I0309 16:35:12.352024 7604 scope.go:117] "RemoveContainer" containerID="1ef790e4963197709ca73ccb0ef459f616a446f12d2312254e29118d5fbf4647"
Mar 09 16:35:12.370208 master-0 kubenswrapper[7604]: I0309 16:35:12.370139 7604 scope.go:117] "RemoveContainer" containerID="9246a82d36d6d839dd216afb960c961d28bf9631aa040ddcbe7751de007686ca"
Mar 09 16:35:12.390977 master-0 kubenswrapper[7604]: I0309 16:35:12.390580 7604 scope.go:117] "RemoveContainer" containerID="deb49cc582b4f05da3e439b71cfab3c7b565bd681dbf4fabe99e76944648f931"
Mar 09 16:35:12.415249 master-0 kubenswrapper[7604]: I0309 16:35:12.415191 7604 scope.go:117] "RemoveContainer" containerID="35ea1971363594acb6e2af9ffc0246bb0a5c5f470f8d574da32d0f7bbc775968"
Mar 09 16:35:12.436921 master-0 kubenswrapper[7604]: I0309 16:35:12.436856 7604 scope.go:117] "RemoveContainer" containerID="77a3a31971fc786009b6ca6331ba76028043254e7a94076dc174933975c99fea"
Mar 09 16:35:12.457205 master-0 kubenswrapper[7604]: I0309 16:35:12.457122 7604 scope.go:117] "RemoveContainer" containerID="ac7dbd1722f48f03cc15a7ad9f7c4d79c749293927a88ba8bf73c146e69f9d3b"
Mar 09 16:35:12.729613 master-0 kubenswrapper[7604]: I0309 16:35:12.729500 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:12.729613 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:12.729613 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:12.729613 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:12.729613 master-0 kubenswrapper[7604]: I0309 16:35:12.729610 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:13.120286 master-0 kubenswrapper[7604]: I0309 16:35:13.120067 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes"
Mar 09 16:35:13.728563 master-0 kubenswrapper[7604]: I0309 16:35:13.728448 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:13.728563 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:13.728563 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:13.728563 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:13.728563 master-0 kubenswrapper[7604]: I0309 16:35:13.728547 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:14.730638 master-0 kubenswrapper[7604]: I0309 16:35:14.730522 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:14.730638 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:14.730638 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:14.730638 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:14.731651 master-0 kubenswrapper[7604]: I0309 16:35:14.730655 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:14.864167 master-0 kubenswrapper[7604]: E0309 16:35:14.863908 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189b397b7d17656b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:34:40.842868075 +0000 UTC m=+537.896837488,LastTimestamp:2026-03-09 16:34:40.842868075 +0000 UTC m=+537.896837488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:35:15.112091 master-0 kubenswrapper[7604]: I0309 16:35:15.111989 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6"
Mar 09 16:35:15.112537 master-0 kubenswrapper[7604]: E0309 16:35:15.112406 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:35:15.345386 master-0 kubenswrapper[7604]: I0309 16:35:15.345190 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/2.log"
Mar 09 16:35:15.346468 master-0 kubenswrapper[7604]: I0309 16:35:15.346385 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/1.log"
Mar 09 16:35:15.347082 master-0 kubenswrapper[7604]: I0309 16:35:15.346999 7604 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44" exitCode=1
Mar 09 16:35:15.347163 master-0 kubenswrapper[7604]: I0309 16:35:15.347086 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerDied","Data":"6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44"}
Mar 09 16:35:15.347217 master-0 kubenswrapper[7604]: I0309 16:35:15.347174 7604 scope.go:117] "RemoveContainer" containerID="f8200a1495a7e1c37d6537537ac72284b2e4af062cfb0a0dbced10da1379a3d0"
Mar 09 16:35:15.348078 master-0 kubenswrapper[7604]: I0309 16:35:15.348042 7604 scope.go:117] "RemoveContainer" containerID="6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44"
Mar 09 16:35:15.348380 master-0 kubenswrapper[7604]: E0309 16:35:15.348344 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09
16:35:15.728192 master-0 kubenswrapper[7604]: I0309 16:35:15.728112 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:15.728192 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:15.728192 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:15.728192 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:15.728192 master-0 kubenswrapper[7604]: I0309 16:35:15.728192 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:16.356365 master-0 kubenswrapper[7604]: I0309 16:35:16.356277 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/2.log" Mar 09 16:35:16.729239 master-0 kubenswrapper[7604]: I0309 16:35:16.729156 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:16.729239 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:16.729239 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:16.729239 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:16.729894 master-0 kubenswrapper[7604]: I0309 16:35:16.729799 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 09 16:35:17.728956 master-0 kubenswrapper[7604]: I0309 16:35:17.728826 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:17.728956 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:17.728956 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:17.728956 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:17.728956 master-0 kubenswrapper[7604]: I0309 16:35:17.728961 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:18.110882 master-0 kubenswrapper[7604]: I0309 16:35:18.110657 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 09 16:35:18.134349 master-0 kubenswrapper[7604]: I0309 16:35:18.134259 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d" Mar 09 16:35:18.134349 master-0 kubenswrapper[7604]: I0309 16:35:18.134321 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d" Mar 09 16:35:18.728648 master-0 kubenswrapper[7604]: I0309 16:35:18.728591 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:18.728648 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:18.728648 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:18.728648 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:18.729521 master-0 kubenswrapper[7604]: I0309 16:35:18.729324 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:19.728264 master-0 kubenswrapper[7604]: I0309 16:35:19.728207 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:19.728264 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:19.728264 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:19.728264 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:19.728595 master-0 kubenswrapper[7604]: I0309 16:35:19.728282 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:20.728860 master-0 kubenswrapper[7604]: I0309 16:35:20.728735 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:20.728860 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:20.728860 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:20.728860 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:20.729674 master-0 kubenswrapper[7604]: I0309 16:35:20.728881 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:21.340973 master-0 kubenswrapper[7604]: E0309 16:35:21.340819 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:35:21.728739 master-0 kubenswrapper[7604]: I0309 16:35:21.728613 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:21.728739 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:21.728739 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:21.728739 master-0 
kubenswrapper[7604]: healthz check failed Mar 09 16:35:21.729557 master-0 kubenswrapper[7604]: I0309 16:35:21.728764 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:22.729901 master-0 kubenswrapper[7604]: I0309 16:35:22.729782 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:22.729901 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:22.729901 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:22.729901 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:22.730836 master-0 kubenswrapper[7604]: I0309 16:35:22.729905 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:23.729045 master-0 kubenswrapper[7604]: I0309 16:35:23.728975 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:23.729045 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:23.729045 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:23.729045 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:23.729668 master-0 kubenswrapper[7604]: I0309 16:35:23.729633 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:24.730308 master-0 kubenswrapper[7604]: I0309 16:35:24.730216 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:24.730308 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:24.730308 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:24.730308 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:24.731518 master-0 kubenswrapper[7604]: I0309 16:35:24.730321 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:25.730228 master-0 kubenswrapper[7604]: I0309 16:35:25.730065 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:25.730228 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:25.730228 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:25.730228 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:25.730228 master-0 kubenswrapper[7604]: I0309 16:35:25.730224 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:26.095167 
master-0 kubenswrapper[7604]: E0309 16:35:26.094925 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:35:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:35:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:35:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:35:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:35:26.729852 master-0 kubenswrapper[7604]: I0309 16:35:26.729756 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:26.729852 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:26.729852 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:26.729852 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:26.729852 master-0 kubenswrapper[7604]: I0309 16:35:26.729856 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:27.729505 master-0 kubenswrapper[7604]: I0309 16:35:27.729413 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:27.729505 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:27.729505 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:27.729505 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:27.730149 master-0 kubenswrapper[7604]: I0309 16:35:27.729525 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:28.728568 master-0 kubenswrapper[7604]: I0309 16:35:28.728460 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:28.728568 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:28.728568 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:28.728568 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:28.728568 master-0 kubenswrapper[7604]: I0309 16:35:28.728573 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:29.110975 
master-0 kubenswrapper[7604]: I0309 16:35:29.110779 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6" Mar 09 16:35:29.111708 master-0 kubenswrapper[7604]: I0309 16:35:29.111671 7604 scope.go:117] "RemoveContainer" containerID="6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44" Mar 09 16:35:29.112271 master-0 kubenswrapper[7604]: E0309 16:35:29.111993 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:35:29.463898 master-0 kubenswrapper[7604]: I0309 16:35:29.463822 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"eaddeb74cad1cc379d9d86fe9a80bfaddf331db322541234b6d1f1fe5c41bc72"} Mar 09 16:35:29.728992 master-0 kubenswrapper[7604]: I0309 16:35:29.728790 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:29.728992 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:29.728992 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:29.728992 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:29.728992 master-0 kubenswrapper[7604]: I0309 16:35:29.728925 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:30.729119 master-0 kubenswrapper[7604]: I0309 16:35:30.729026 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:30.729119 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:30.729119 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:30.729119 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:30.730116 master-0 kubenswrapper[7604]: I0309 16:35:30.729132 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:31.342113 master-0 kubenswrapper[7604]: E0309 16:35:31.342007 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 09 16:35:31.342113 master-0 kubenswrapper[7604]: I0309 16:35:31.342094 7604 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 09 16:35:31.727967 master-0 kubenswrapper[7604]: I0309 16:35:31.727881 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:31.727967 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:31.727967 master-0 kubenswrapper[7604]: 
[+]process-running ok Mar 09 16:35:31.727967 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:31.728483 master-0 kubenswrapper[7604]: I0309 16:35:31.727988 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:32.728897 master-0 kubenswrapper[7604]: I0309 16:35:32.728820 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:32.728897 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:32.728897 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:32.728897 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:32.729975 master-0 kubenswrapper[7604]: I0309 16:35:32.728914 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:33.727960 master-0 kubenswrapper[7604]: I0309 16:35:33.727887 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:33.727960 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:33.727960 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:33.727960 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:33.727960 master-0 kubenswrapper[7604]: I0309 16:35:33.727959 7604 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:34.504062 master-0 kubenswrapper[7604]: I0309 16:35:34.503980 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/1.log" Mar 09 16:35:34.504812 master-0 kubenswrapper[7604]: I0309 16:35:34.504759 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/0.log" Mar 09 16:35:34.505182 master-0 kubenswrapper[7604]: I0309 16:35:34.505134 7604 generic.go:334] "Generic (PLEG): container finished" podID="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" containerID="13f8ce747ae94aa028643a0d90bae20ae130da211dc31135e5f8daffa80a000f" exitCode=1 Mar 09 16:35:34.505260 master-0 kubenswrapper[7604]: I0309 16:35:34.505172 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerDied","Data":"13f8ce747ae94aa028643a0d90bae20ae130da211dc31135e5f8daffa80a000f"} Mar 09 16:35:34.505260 master-0 kubenswrapper[7604]: I0309 16:35:34.505228 7604 scope.go:117] "RemoveContainer" containerID="c33568491251a6cc29f433d394d9f99ae4624c6f4d925ee43ed4349c74f3003e" Mar 09 16:35:34.505883 master-0 kubenswrapper[7604]: I0309 16:35:34.505840 7604 scope.go:117] "RemoveContainer" containerID="13f8ce747ae94aa028643a0d90bae20ae130da211dc31135e5f8daffa80a000f" Mar 09 16:35:34.506091 master-0 kubenswrapper[7604]: E0309 16:35:34.506043 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver 
pod=network-node-identity-nqwd2_openshift-network-node-identity(60e07bf5-933c-4ff6-9a1a-2fd05392c8e9)\"" pod="openshift-network-node-identity/network-node-identity-nqwd2" podUID="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" Mar 09 16:35:34.729114 master-0 kubenswrapper[7604]: I0309 16:35:34.729007 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:34.729114 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:35:34.729114 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:35:34.729114 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:35:34.729114 master-0 kubenswrapper[7604]: I0309 16:35:34.729111 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:35:35.056793 master-0 kubenswrapper[7604]: I0309 16:35:35.056716 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:35:35.513522 master-0 kubenswrapper[7604]: I0309 16:35:35.513416 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/1.log" Mar 09 16:35:35.728399 master-0 kubenswrapper[7604]: I0309 16:35:35.728310 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:35:35.728399 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:35:35.728399 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:35.728399 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:35.728854 master-0 kubenswrapper[7604]: I0309 16:35:35.728479 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:36.096018 master-0 kubenswrapper[7604]: E0309 16:35:36.095933 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:36.730635 master-0 kubenswrapper[7604]: I0309 16:35:36.730544 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:36.730635 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:36.730635 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:36.730635 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:36.731548 master-0 kubenswrapper[7604]: I0309 16:35:36.730654 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:37.180887 master-0 kubenswrapper[7604]: I0309 16:35:37.180644 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:35:37.727737 master-0 kubenswrapper[7604]: I0309 16:35:37.727656 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:37.727737 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:37.727737 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:37.727737 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:37.727737 master-0 kubenswrapper[7604]: I0309 16:35:37.727748 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:38.057935 master-0 kubenswrapper[7604]: I0309 16:35:38.057623 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:38.727314 master-0 kubenswrapper[7604]: I0309 16:35:38.727251 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:38.727314 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:38.727314 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:38.727314 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:38.727680 master-0 kubenswrapper[7604]: I0309 16:35:38.727318 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:39.729146 master-0 kubenswrapper[7604]: I0309 16:35:39.729077 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:39.729146 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:39.729146 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:39.729146 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:39.729796 master-0 kubenswrapper[7604]: I0309 16:35:39.729166 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:40.727118 master-0 kubenswrapper[7604]: I0309 16:35:40.727052 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:40.727118 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:40.727118 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:40.727118 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:40.727413 master-0 kubenswrapper[7604]: I0309 16:35:40.727123 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:41.007494 master-0 kubenswrapper[7604]: I0309 16:35:41.007241 7604 status_manager.go:851] "Failed to get status for pod" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)"
Mar 09 16:35:41.343002 master-0 kubenswrapper[7604]: E0309 16:35:41.342823 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 09 16:35:41.729214 master-0 kubenswrapper[7604]: I0309 16:35:41.729121 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:41.729214 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:41.729214 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:41.729214 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:41.729764 master-0 kubenswrapper[7604]: I0309 16:35:41.729252 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:42.728162 master-0 kubenswrapper[7604]: I0309 16:35:42.728094 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:42.728162 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:42.728162 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:42.728162 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:42.728881 master-0 kubenswrapper[7604]: I0309 16:35:42.728164 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:43.728777 master-0 kubenswrapper[7604]: I0309 16:35:43.728701 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:43.728777 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:43.728777 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:43.728777 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:43.729566 master-0 kubenswrapper[7604]: I0309 16:35:43.728793 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:44.111925 master-0 kubenswrapper[7604]: I0309 16:35:44.111696 7604 scope.go:117] "RemoveContainer" containerID="6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44"
Mar 09 16:35:44.582221 master-0 kubenswrapper[7604]: I0309 16:35:44.582127 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/2.log"
Mar 09 16:35:44.583669 master-0 kubenswrapper[7604]: I0309 16:35:44.583613 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0"}
Mar 09 16:35:44.730440 master-0 kubenswrapper[7604]: I0309 16:35:44.730307 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:44.730440 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:44.730440 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:44.730440 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:44.731314 master-0 kubenswrapper[7604]: I0309 16:35:44.730476 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:45.728325 master-0 kubenswrapper[7604]: I0309 16:35:45.728220 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:45.728325 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:45.728325 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:45.728325 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:45.728325 master-0 kubenswrapper[7604]: I0309 16:35:45.728322 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:46.097536 master-0 kubenswrapper[7604]: E0309 16:35:46.097183 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:46.111447 master-0 kubenswrapper[7604]: I0309 16:35:46.111363 7604 scope.go:117] "RemoveContainer" containerID="13f8ce747ae94aa028643a0d90bae20ae130da211dc31135e5f8daffa80a000f"
Mar 09 16:35:46.601640 master-0 kubenswrapper[7604]: I0309 16:35:46.601590 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/1.log"
Mar 09 16:35:46.602819 master-0 kubenswrapper[7604]: I0309 16:35:46.602748 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-nqwd2" event={"ID":"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9","Type":"ContainerStarted","Data":"0dc835c31eb639c75d4498942acddf82a61df43d5a8bde0d772606e0e747e100"}
Mar 09 16:35:46.728541 master-0 kubenswrapper[7604]: I0309 16:35:46.728449 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:46.728541 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:46.728541 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:46.728541 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:46.728977 master-0 kubenswrapper[7604]: I0309 16:35:46.728557 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:47.729508 master-0 kubenswrapper[7604]: I0309 16:35:47.729397 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:47.729508 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:47.729508 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:47.729508 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:47.730483 master-0 kubenswrapper[7604]: I0309 16:35:47.729517 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:48.057450 master-0 kubenswrapper[7604]: I0309 16:35:48.057088 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:48.727703 master-0 kubenswrapper[7604]: I0309 16:35:48.727581 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:48.727703 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:48.727703 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:48.727703 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:48.728192 master-0 kubenswrapper[7604]: I0309 16:35:48.727807 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:48.867459 master-0 kubenswrapper[7604]: E0309 16:35:48.867269 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189b397f0cbfca0c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:34:56.13794766 +0000 UTC m=+553.191917083,LastTimestamp:2026-03-09 16:34:56.13794766 +0000 UTC m=+553.191917083,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:35:49.727661 master-0 kubenswrapper[7604]: I0309 16:35:49.727592 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:49.727661 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:49.727661 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:49.727661 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:49.727973 master-0 kubenswrapper[7604]: I0309 16:35:49.727693 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:50.728783 master-0 kubenswrapper[7604]: I0309 16:35:50.728682 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:50.728783 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:50.728783 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:50.728783 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:50.729782 master-0 kubenswrapper[7604]: I0309 16:35:50.728796 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:51.544732 master-0 kubenswrapper[7604]: E0309 16:35:51.544556 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 09 16:35:51.729401 master-0 kubenswrapper[7604]: I0309 16:35:51.729296 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:51.729401 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:51.729401 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:51.729401 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:51.730315 master-0 kubenswrapper[7604]: I0309 16:35:51.729436 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:52.137089 master-0 kubenswrapper[7604]: E0309 16:35:52.137024 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 09 16:35:52.137683 master-0 kubenswrapper[7604]: I0309 16:35:52.137572 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 09 16:35:52.158515 master-0 kubenswrapper[7604]: W0309 16:35:52.158446 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c WatchSource:0}: Error finding container 36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c: Status 404 returned error can't find the container with id 36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c
Mar 09 16:35:52.646282 master-0 kubenswrapper[7604]: I0309 16:35:52.646200 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"626cde970cdf775bf13812ca4ee6f26bea7e402f4efa0e9b555e7bcc797f2635"}
Mar 09 16:35:52.646282 master-0 kubenswrapper[7604]: I0309 16:35:52.646282 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c"}
Mar 09 16:35:52.646796 master-0 kubenswrapper[7604]: I0309 16:35:52.646759 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:35:52.646796 master-0 kubenswrapper[7604]: I0309 16:35:52.646788 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:35:52.728994 master-0 kubenswrapper[7604]: I0309 16:35:52.728939 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:52.728994 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:52.728994 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:52.728994 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:52.729387 master-0 kubenswrapper[7604]: I0309 16:35:52.729004 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:53.657447 master-0 kubenswrapper[7604]: I0309 16:35:53.657325 7604 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="626cde970cdf775bf13812ca4ee6f26bea7e402f4efa0e9b555e7bcc797f2635" exitCode=0
Mar 09 16:35:53.657447 master-0 kubenswrapper[7604]: I0309 16:35:53.657418 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"626cde970cdf775bf13812ca4ee6f26bea7e402f4efa0e9b555e7bcc797f2635"}
Mar 09 16:35:53.729176 master-0 kubenswrapper[7604]: I0309 16:35:53.729034 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:53.729176 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:53.729176 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:53.729176 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:53.729715 master-0 kubenswrapper[7604]: I0309 16:35:53.729191 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:54.729040 master-0 kubenswrapper[7604]: I0309 16:35:54.728948 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:54.729040 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:54.729040 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:54.729040 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:54.729040 master-0 kubenswrapper[7604]: I0309 16:35:54.729039 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:55.728489 master-0 kubenswrapper[7604]: I0309 16:35:55.728325 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:55.728489 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:55.728489 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:55.728489 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:55.728962 master-0 kubenswrapper[7604]: I0309 16:35:55.728508 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:56.099059 master-0 kubenswrapper[7604]: E0309 16:35:56.098863 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:56.728191 master-0 kubenswrapper[7604]: I0309 16:35:56.728111 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:56.728191 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:56.728191 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:56.728191 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:56.728621 master-0 kubenswrapper[7604]: I0309 16:35:56.728220 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:57.729316 master-0 kubenswrapper[7604]: I0309 16:35:57.729218 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:57.729316 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:57.729316 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:57.729316 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:57.730147 master-0 kubenswrapper[7604]: I0309 16:35:57.729330 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:58.057535 master-0 kubenswrapper[7604]: I0309 16:35:58.057252 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:35:58.057535 master-0 kubenswrapper[7604]: I0309 16:35:58.057492 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:35:58.058767 master-0 kubenswrapper[7604]: I0309 16:35:58.058682 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"eaddeb74cad1cc379d9d86fe9a80bfaddf331db322541234b6d1f1fe5c41bc72"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 09 16:35:58.058839 master-0 kubenswrapper[7604]: I0309 16:35:58.058808 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://eaddeb74cad1cc379d9d86fe9a80bfaddf331db322541234b6d1f1fe5c41bc72" gracePeriod=30
Mar 09 16:35:58.697569 master-0 kubenswrapper[7604]: I0309 16:35:58.697495 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="eaddeb74cad1cc379d9d86fe9a80bfaddf331db322541234b6d1f1fe5c41bc72" exitCode=2
Mar 09 16:35:58.697975 master-0 kubenswrapper[7604]: I0309 16:35:58.697927 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"eaddeb74cad1cc379d9d86fe9a80bfaddf331db322541234b6d1f1fe5c41bc72"}
Mar 09 16:35:58.698086 master-0 kubenswrapper[7604]: I0309 16:35:58.698072 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"}
Mar 09 16:35:58.698171 master-0 kubenswrapper[7604]: I0309 16:35:58.698158 7604 scope.go:117] "RemoveContainer" containerID="9a9ddb96d4c10cc99dc834f80948637fe857f3fae07578d589ccda8a00e571f6"
Mar 09 16:35:58.728330 master-0 kubenswrapper[7604]: I0309 16:35:58.728242 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:58.728330 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:58.728330 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:58.728330 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:58.728732 master-0 kubenswrapper[7604]: I0309 16:35:58.728365 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:35:59.728535 master-0 kubenswrapper[7604]: I0309 16:35:59.728471 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:35:59.728535 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:35:59.728535 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:35:59.728535 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:35:59.729515 master-0 kubenswrapper[7604]: I0309 16:35:59.728562 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:00.728609 master-0 kubenswrapper[7604]: I0309 16:36:00.728498 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:00.728609 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:00.728609 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:00.728609 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:00.729379 master-0 kubenswrapper[7604]: I0309 16:36:00.728648 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:01.728460 master-0 kubenswrapper[7604]: I0309 16:36:01.728348 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:01.728460 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:01.728460 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:01.728460 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:01.728460 master-0 kubenswrapper[7604]: I0309 16:36:01.728456 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:01.946065 master-0 kubenswrapper[7604]: E0309 16:36:01.945932 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="800ms"
Mar 09 16:36:02.728347 master-0 kubenswrapper[7604]: I0309 16:36:02.728263 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:02.728347 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:02.728347 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:02.728347 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:02.729350 master-0 kubenswrapper[7604]: I0309 16:36:02.728381 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:03.729111 master-0 kubenswrapper[7604]: I0309 16:36:03.729057 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:03.729111 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:03.729111 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:03.729111 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:03.729902 master-0 kubenswrapper[7604]: I0309 16:36:03.729776 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:04.728654 master-0 kubenswrapper[7604]: I0309 16:36:04.728557 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:04.728654 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:04.728654 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:04.728654 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:04.729064 master-0 kubenswrapper[7604]: I0309 16:36:04.728677 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:05.057412 master-0 kubenswrapper[7604]: I0309 16:36:05.057189 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:36:05.729414 master-0 kubenswrapper[7604]: I0309 16:36:05.729325 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:05.729414 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:05.729414 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:05.729414 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:05.729897 master-0 kubenswrapper[7604]: I0309 16:36:05.729461 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:06.099790 master-0 kubenswrapper[7604]: E0309 16:36:06.099611 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:36:06.099790 master-0 kubenswrapper[7604]: E0309 16:36:06.099666 7604 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 09 16:36:06.728602 master-0 kubenswrapper[7604]: I0309 16:36:06.728510 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:06.728602 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:06.728602 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:06.728602 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:06.728914 master-0 kubenswrapper[7604]: I0309 16:36:06.728632 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:07.188647 master-0 kubenswrapper[7604]: I0309 16:36:07.182038 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:36:07.727565 master-0 kubenswrapper[7604]: I0309 16:36:07.727483 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:07.727565 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:07.727565 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:07.727565 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:07.728057 master-0 kubenswrapper[7604]: I0309 16:36:07.727590 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:08.058301 master-0 kubenswrapper[7604]: I0309 16:36:08.058119 7604 prober.go:107] "Probe failed" probeType="Startup"
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:36:08.728469 master-0 kubenswrapper[7604]: I0309 16:36:08.728409 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:08.728469 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:08.728469 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:08.728469 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:08.729061 master-0 kubenswrapper[7604]: I0309 16:36:08.728498 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:09.728492 master-0 kubenswrapper[7604]: I0309 16:36:09.728388 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:09.728492 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:09.728492 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:09.728492 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:09.730172 master-0 kubenswrapper[7604]: I0309 16:36:09.730077 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" 
podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:10.727169 master-0 kubenswrapper[7604]: I0309 16:36:10.727107 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:10.727169 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:10.727169 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:10.727169 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:10.727613 master-0 kubenswrapper[7604]: I0309 16:36:10.727194 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:11.728853 master-0 kubenswrapper[7604]: I0309 16:36:11.728759 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:11.728853 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:11.728853 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:11.728853 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:11.728853 master-0 kubenswrapper[7604]: I0309 16:36:11.728832 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:12.729260 master-0 kubenswrapper[7604]: I0309 16:36:12.729113 7604 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:12.729260 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:12.729260 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:12.729260 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:12.729260 master-0 kubenswrapper[7604]: I0309 16:36:12.729227 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:12.747681 master-0 kubenswrapper[7604]: E0309 16:36:12.747531 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 09 16:36:13.728996 master-0 kubenswrapper[7604]: I0309 16:36:13.728907 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:13.728996 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:13.728996 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:13.728996 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:13.728996 master-0 kubenswrapper[7604]: I0309 16:36:13.728991 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:14.728920 master-0 kubenswrapper[7604]: I0309 16:36:14.728802 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:14.728920 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:14.728920 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:14.728920 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:14.729956 master-0 kubenswrapper[7604]: I0309 16:36:14.728957 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:15.728465 master-0 kubenswrapper[7604]: I0309 16:36:15.728337 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:15.728465 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:15.728465 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:15.728465 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:15.728999 master-0 kubenswrapper[7604]: I0309 16:36:15.728479 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:16.728972 master-0 kubenswrapper[7604]: I0309 16:36:16.728885 7604 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:16.728972 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:16.728972 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:16.728972 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:16.730046 master-0 kubenswrapper[7604]: I0309 16:36:16.729550 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:17.727667 master-0 kubenswrapper[7604]: I0309 16:36:17.727387 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:17.727667 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:17.727667 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:17.727667 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:17.727667 master-0 kubenswrapper[7604]: I0309 16:36:17.727503 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:18.058513 master-0 kubenswrapper[7604]: I0309 16:36:18.058299 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:36:18.728368 master-0 kubenswrapper[7604]: I0309 16:36:18.728302 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:18.728368 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:18.728368 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:18.728368 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:18.728684 master-0 kubenswrapper[7604]: I0309 16:36:18.728375 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:19.727048 master-0 kubenswrapper[7604]: I0309 16:36:19.726964 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:19.727048 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:19.727048 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:19.727048 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:19.727679 master-0 kubenswrapper[7604]: I0309 16:36:19.727050 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:20.729113 master-0 kubenswrapper[7604]: I0309 
16:36:20.729005 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:20.729113 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:20.729113 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:20.729113 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:20.730001 master-0 kubenswrapper[7604]: I0309 16:36:20.729143 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:21.729095 master-0 kubenswrapper[7604]: I0309 16:36:21.728994 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:21.729095 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:21.729095 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:21.729095 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:21.729095 master-0 kubenswrapper[7604]: I0309 16:36:21.729115 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:22.727760 master-0 kubenswrapper[7604]: I0309 16:36:22.727687 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:22.727760 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:22.727760 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:22.727760 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:22.728413 master-0 kubenswrapper[7604]: I0309 16:36:22.727947 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:22.871681 master-0 kubenswrapper[7604]: E0309 16:36:22.871150 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b397f0cf8519a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:34:56.141652378 +0000 UTC m=+553.195621821,LastTimestamp:2026-03-09 16:34:56.141652378 +0000 UTC m=+553.195621821,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:36:23.728713 master-0 kubenswrapper[7604]: I0309 16:36:23.728552 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:23.728713 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:23.728713 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:23.728713 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:23.728713 master-0 kubenswrapper[7604]: I0309 16:36:23.728623 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:24.348500 master-0 kubenswrapper[7604]: E0309 16:36:24.348374 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 09 16:36:24.729888 master-0 kubenswrapper[7604]: I0309 16:36:24.729786 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:24.729888 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:24.729888 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:24.729888 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:24.730399 master-0 kubenswrapper[7604]: I0309 16:36:24.729924 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:25.729261 master-0 kubenswrapper[7604]: I0309 16:36:25.729167 7604 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:25.729261 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:25.729261 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:25.729261 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:25.730123 master-0 kubenswrapper[7604]: I0309 16:36:25.729292 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:26.650782 master-0 kubenswrapper[7604]: E0309 16:36:26.650603 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 09 16:36:26.729259 master-0 kubenswrapper[7604]: I0309 16:36:26.729131 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:26.729259 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:26.729259 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:26.729259 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:26.729259 master-0 kubenswrapper[7604]: I0309 16:36:26.729244 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:26.897569 master-0 
kubenswrapper[7604]: I0309 16:36:26.897498 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d" Mar 09 16:36:26.897569 master-0 kubenswrapper[7604]: I0309 16:36:26.897536 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d" Mar 09 16:36:27.728083 master-0 kubenswrapper[7604]: I0309 16:36:27.728014 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:27.728083 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:27.728083 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:27.728083 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:27.728472 master-0 kubenswrapper[7604]: I0309 16:36:27.728096 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:28.057459 master-0 kubenswrapper[7604]: I0309 16:36:28.057165 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:36:28.057459 master-0 kubenswrapper[7604]: I0309 16:36:28.057346 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:36:28.058763 master-0 kubenswrapper[7604]: I0309 
16:36:28.058329 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 09 16:36:28.058763 master-0 kubenswrapper[7604]: I0309 16:36:28.058398 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6" gracePeriod=30 Mar 09 16:36:28.181673 master-0 kubenswrapper[7604]: E0309 16:36:28.181576 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:36:28.729465 master-0 kubenswrapper[7604]: I0309 16:36:28.729361 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:28.729465 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:28.729465 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:28.729465 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:28.729465 master-0 kubenswrapper[7604]: I0309 16:36:28.729451 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:28.912174 master-0 kubenswrapper[7604]: I0309 16:36:28.912090 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6" exitCode=2 Mar 09 16:36:28.912174 master-0 kubenswrapper[7604]: I0309 16:36:28.912155 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"} Mar 09 16:36:28.912504 master-0 kubenswrapper[7604]: I0309 16:36:28.912202 7604 scope.go:117] "RemoveContainer" containerID="eaddeb74cad1cc379d9d86fe9a80bfaddf331db322541234b6d1f1fe5c41bc72" Mar 09 16:36:28.913212 master-0 kubenswrapper[7604]: I0309 16:36:28.913180 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6" Mar 09 16:36:28.913682 master-0 kubenswrapper[7604]: E0309 16:36:28.913570 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:36:29.729311 master-0 kubenswrapper[7604]: I0309 16:36:29.729227 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:36:29.729311 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:29.729311 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:29.729311 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:29.730181 master-0 kubenswrapper[7604]: I0309 16:36:29.729332 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:30.728247 master-0 kubenswrapper[7604]: I0309 16:36:30.728183 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:30.728247 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:30.728247 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:30.728247 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:30.728247 master-0 kubenswrapper[7604]: I0309 16:36:30.728245 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:31.727506 master-0 kubenswrapper[7604]: I0309 16:36:31.727412 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:31.727506 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:31.727506 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:31.727506 master-0 kubenswrapper[7604]: healthz 
check failed Mar 09 16:36:31.728157 master-0 kubenswrapper[7604]: I0309 16:36:31.727515 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:32.728629 master-0 kubenswrapper[7604]: I0309 16:36:32.728537 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:32.728629 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:32.728629 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:32.728629 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:32.729280 master-0 kubenswrapper[7604]: I0309 16:36:32.728647 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:32.914488 master-0 kubenswrapper[7604]: I0309 16:36:32.914409 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:36:32.915126 master-0 kubenswrapper[7604]: I0309 16:36:32.915100 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6" Mar 09 16:36:32.915350 master-0 kubenswrapper[7604]: E0309 16:36:32.915324 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:36:32.944222 master-0 kubenswrapper[7604]: I0309 16:36:32.944144 7604 generic.go:334] "Generic (PLEG): container finished" podID="5b9030c9-7f5f-4e54-ae93-140469e3558b" containerID="66330a4bd334b8d1827e4db59cc4dd96a4c0efbd28a98ca757e4b3ea6788abd7" exitCode=0 Mar 09 16:36:32.944557 master-0 kubenswrapper[7604]: I0309 16:36:32.944224 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" event={"ID":"5b9030c9-7f5f-4e54-ae93-140469e3558b","Type":"ContainerDied","Data":"66330a4bd334b8d1827e4db59cc4dd96a4c0efbd28a98ca757e4b3ea6788abd7"} Mar 09 16:36:32.945107 master-0 kubenswrapper[7604]: I0309 16:36:32.945067 7604 scope.go:117] "RemoveContainer" containerID="66330a4bd334b8d1827e4db59cc4dd96a4c0efbd28a98ca757e4b3ea6788abd7" Mar 09 16:36:33.728857 master-0 kubenswrapper[7604]: I0309 16:36:33.728799 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:33.728857 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:33.728857 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:33.728857 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:33.729924 master-0 kubenswrapper[7604]: I0309 16:36:33.729797 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:33.957053 master-0 kubenswrapper[7604]: I0309 16:36:33.956937 7604 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" event={"ID":"5b9030c9-7f5f-4e54-ae93-140469e3558b","Type":"ContainerStarted","Data":"4fba0c0837af3ad18f5142ad883c86c3e5ad74780a83392522fd079d08a98a5e"} Mar 09 16:36:33.957703 master-0 kubenswrapper[7604]: I0309 16:36:33.957638 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:36:33.959082 master-0 kubenswrapper[7604]: I0309 16:36:33.959039 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:36:33.960517 master-0 kubenswrapper[7604]: I0309 16:36:33.960481 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/cluster-cloud-controller-manager/0.log" Mar 09 16:36:33.960599 master-0 kubenswrapper[7604]: I0309 16:36:33.960529 7604 generic.go:334] "Generic (PLEG): container finished" podID="ea34ff7e-27fa-4c26-acc0-ec551985eb76" containerID="cd71269592a701160cbe606bc3b5a764b96e0af9d702d7660f9fc5b18a628065" exitCode=1 Mar 09 16:36:33.960599 master-0 kubenswrapper[7604]: I0309 16:36:33.960565 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerDied","Data":"cd71269592a701160cbe606bc3b5a764b96e0af9d702d7660f9fc5b18a628065"} Mar 09 16:36:33.961519 master-0 kubenswrapper[7604]: I0309 16:36:33.961365 7604 scope.go:117] "RemoveContainer" containerID="cd71269592a701160cbe606bc3b5a764b96e0af9d702d7660f9fc5b18a628065" Mar 09 16:36:34.728646 master-0 kubenswrapper[7604]: I0309 16:36:34.728551 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:34.728646 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:34.728646 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:34.728646 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:34.729377 master-0 kubenswrapper[7604]: I0309 16:36:34.728650 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:34.970381 master-0 kubenswrapper[7604]: I0309 16:36:34.970318 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/cluster-cloud-controller-manager/0.log" Mar 09 16:36:34.970648 master-0 kubenswrapper[7604]: I0309 16:36:34.970451 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerStarted","Data":"69c27a0613d88f3e58ea9929d6c74b06e59d3423646696a9f8c27c0fefb2ff66"} Mar 09 16:36:35.728462 master-0 kubenswrapper[7604]: I0309 16:36:35.728339 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:35.728462 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:35.728462 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:35.728462 master-0 kubenswrapper[7604]: healthz check failed 
Mar 09 16:36:35.728909 master-0 kubenswrapper[7604]: I0309 16:36:35.728483 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:36.729232 master-0 kubenswrapper[7604]: I0309 16:36:36.729119 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:36.729232 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:36.729232 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:36.729232 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:36.729232 master-0 kubenswrapper[7604]: I0309 16:36:36.729218 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:37.549934 master-0 kubenswrapper[7604]: E0309 16:36:37.549842 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 09 16:36:37.728285 master-0 kubenswrapper[7604]: I0309 16:36:37.728145 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:37.728285 master-0 kubenswrapper[7604]: [-]has-synced failed: reason 
withheld Mar 09 16:36:37.728285 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:37.728285 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:37.728285 master-0 kubenswrapper[7604]: I0309 16:36:37.728245 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:38.728396 master-0 kubenswrapper[7604]: I0309 16:36:38.728309 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:38.728396 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:38.728396 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:38.728396 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:38.729568 master-0 kubenswrapper[7604]: I0309 16:36:38.729525 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:39.728833 master-0 kubenswrapper[7604]: I0309 16:36:39.728705 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:39.728833 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:39.728833 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:39.728833 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:39.729767 master-0 kubenswrapper[7604]: I0309 
16:36:39.728849 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:40.727977 master-0 kubenswrapper[7604]: I0309 16:36:40.727894 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:40.727977 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:40.727977 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:40.727977 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:40.728217 master-0 kubenswrapper[7604]: I0309 16:36:40.727995 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:41.008907 master-0 kubenswrapper[7604]: I0309 16:36:41.008812 7604 status_manager.go:851] "Failed to get status for pod" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" pod="openshift-kube-apiserver/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 09 16:36:41.013800 master-0 kubenswrapper[7604]: I0309 16:36:41.013741 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/0.log" Mar 09 16:36:41.013883 master-0 kubenswrapper[7604]: I0309 16:36:41.013820 7604 generic.go:334] "Generic (PLEG): container finished" podID="57036838-9f42-4ea1-a5c9-77f820cc22c9" 
containerID="ca16434e380b6db2be43284967084d34f8d84b54a570fafe10c2de9a729bf691" exitCode=1 Mar 09 16:36:41.013943 master-0 kubenswrapper[7604]: I0309 16:36:41.013896 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerDied","Data":"ca16434e380b6db2be43284967084d34f8d84b54a570fafe10c2de9a729bf691"} Mar 09 16:36:41.017053 master-0 kubenswrapper[7604]: I0309 16:36:41.016989 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-xrgml_4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/manager/0.log" Mar 09 16:36:41.017613 master-0 kubenswrapper[7604]: I0309 16:36:41.017553 7604 generic.go:334] "Generic (PLEG): container finished" podID="4a2aa6f3-f049-423a-a8f5-5d33fc214a7b" containerID="4fc5ebe625ed54c3d67f7a4689964a54c61c83f3612ec773524ffd6c73856293" exitCode=1 Mar 09 16:36:41.017684 master-0 kubenswrapper[7604]: I0309 16:36:41.017643 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" event={"ID":"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b","Type":"ContainerDied","Data":"4fc5ebe625ed54c3d67f7a4689964a54c61c83f3612ec773524ffd6c73856293"} Mar 09 16:36:41.018321 master-0 kubenswrapper[7604]: I0309 16:36:41.018277 7604 scope.go:117] "RemoveContainer" containerID="ca16434e380b6db2be43284967084d34f8d84b54a570fafe10c2de9a729bf691" Mar 09 16:36:41.018919 master-0 kubenswrapper[7604]: I0309 16:36:41.018877 7604 scope.go:117] "RemoveContainer" containerID="4fc5ebe625ed54c3d67f7a4689964a54c61c83f3612ec773524ffd6c73856293" Mar 09 16:36:41.020794 master-0 kubenswrapper[7604]: I0309 16:36:41.020747 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-tnbvb_c72e89f0-37ad-4515-89ba-ba1f52ba61f0/manager/0.log" Mar 09 
16:36:41.020932 master-0 kubenswrapper[7604]: I0309 16:36:41.020801 7604 generic.go:334] "Generic (PLEG): container finished" podID="c72e89f0-37ad-4515-89ba-ba1f52ba61f0" containerID="eb0d4a5cd6b917ab3136d6670a91daed3539d6022e53b4e8f77735bc48ef873e" exitCode=1 Mar 09 16:36:41.020932 master-0 kubenswrapper[7604]: I0309 16:36:41.020843 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" event={"ID":"c72e89f0-37ad-4515-89ba-ba1f52ba61f0","Type":"ContainerDied","Data":"eb0d4a5cd6b917ab3136d6670a91daed3539d6022e53b4e8f77735bc48ef873e"} Mar 09 16:36:41.021210 master-0 kubenswrapper[7604]: I0309 16:36:41.021184 7604 scope.go:117] "RemoveContainer" containerID="eb0d4a5cd6b917ab3136d6670a91daed3539d6022e53b4e8f77735bc48ef873e" Mar 09 16:36:41.729555 master-0 kubenswrapper[7604]: I0309 16:36:41.729442 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:41.729555 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:41.729555 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:41.729555 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:41.729555 master-0 kubenswrapper[7604]: I0309 16:36:41.729523 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:42.030683 master-0 kubenswrapper[7604]: I0309 16:36:42.030619 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_3a8a48b1-d4a9-48fb-912e-2f793a6d8478/installer/0.log" Mar 09 16:36:42.030683 master-0 
kubenswrapper[7604]: I0309 16:36:42.030686 7604 generic.go:334] "Generic (PLEG): container finished" podID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerID="1f3ede07b96bf06c243e7982afd5fe4a072e8a3d04eb6bffe1b7a50cca581cf9" exitCode=1 Mar 09 16:36:42.031413 master-0 kubenswrapper[7604]: I0309 16:36:42.030768 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3a8a48b1-d4a9-48fb-912e-2f793a6d8478","Type":"ContainerDied","Data":"1f3ede07b96bf06c243e7982afd5fe4a072e8a3d04eb6bffe1b7a50cca581cf9"} Mar 09 16:36:42.033364 master-0 kubenswrapper[7604]: I0309 16:36:42.033327 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-xrgml_4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/manager/0.log" Mar 09 16:36:42.033902 master-0 kubenswrapper[7604]: I0309 16:36:42.033865 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" event={"ID":"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b","Type":"ContainerStarted","Data":"74cb1ad665d4ea813db73dabdb5deaf235cafd4e2f719c6c0cb37782a4c786c9"} Mar 09 16:36:42.034131 master-0 kubenswrapper[7604]: I0309 16:36:42.034102 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:36:42.036989 master-0 kubenswrapper[7604]: I0309 16:36:42.036928 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-tnbvb_c72e89f0-37ad-4515-89ba-ba1f52ba61f0/manager/0.log" Mar 09 16:36:42.037128 master-0 kubenswrapper[7604]: I0309 16:36:42.037093 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" 
event={"ID":"c72e89f0-37ad-4515-89ba-ba1f52ba61f0","Type":"ContainerStarted","Data":"41004e32c5fcb4908d1be814356cf757c4d463cba79943b2dcb6f1a15014613d"} Mar 09 16:36:42.037412 master-0 kubenswrapper[7604]: I0309 16:36:42.037354 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:36:42.039862 master-0 kubenswrapper[7604]: I0309 16:36:42.039826 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/0.log" Mar 09 16:36:42.039921 master-0 kubenswrapper[7604]: I0309 16:36:42.039905 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerStarted","Data":"c81334b89261d35be2255091a39d304d36bd86f871bd8f896eb0a73bdb6d3990"} Mar 09 16:36:42.728904 master-0 kubenswrapper[7604]: I0309 16:36:42.728813 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:42.728904 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:42.728904 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:42.728904 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:42.729379 master-0 kubenswrapper[7604]: I0309 16:36:42.728924 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:43.364656 master-0 kubenswrapper[7604]: I0309 16:36:43.364575 7604 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_3a8a48b1-d4a9-48fb-912e-2f793a6d8478/installer/0.log" Mar 09 16:36:43.365392 master-0 kubenswrapper[7604]: I0309 16:36:43.364698 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:36:43.387093 master-0 kubenswrapper[7604]: I0309 16:36:43.387010 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kube-api-access\") pod \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " Mar 09 16:36:43.387503 master-0 kubenswrapper[7604]: I0309 16:36:43.387130 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-var-lock\") pod \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " Mar 09 16:36:43.387503 master-0 kubenswrapper[7604]: I0309 16:36:43.387280 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a8a48b1-d4a9-48fb-912e-2f793a6d8478" (UID: "3a8a48b1-d4a9-48fb-912e-2f793a6d8478"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:36:43.387503 master-0 kubenswrapper[7604]: I0309 16:36:43.387323 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kubelet-dir\") pod \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\" (UID: \"3a8a48b1-d4a9-48fb-912e-2f793a6d8478\") " Mar 09 16:36:43.387503 master-0 kubenswrapper[7604]: I0309 16:36:43.387409 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3a8a48b1-d4a9-48fb-912e-2f793a6d8478" (UID: "3a8a48b1-d4a9-48fb-912e-2f793a6d8478"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:36:43.387763 master-0 kubenswrapper[7604]: I0309 16:36:43.387725 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:36:43.387763 master-0 kubenswrapper[7604]: I0309 16:36:43.387753 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:36:43.391937 master-0 kubenswrapper[7604]: I0309 16:36:43.391882 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3a8a48b1-d4a9-48fb-912e-2f793a6d8478" (UID: "3a8a48b1-d4a9-48fb-912e-2f793a6d8478"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:36:43.488359 master-0 kubenswrapper[7604]: I0309 16:36:43.488291 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a8a48b1-d4a9-48fb-912e-2f793a6d8478-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:36:43.728861 master-0 kubenswrapper[7604]: I0309 16:36:43.728753 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:43.728861 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:43.728861 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:43.728861 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:43.729257 master-0 kubenswrapper[7604]: I0309 16:36:43.728858 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:44.059071 master-0 kubenswrapper[7604]: I0309 16:36:44.058861 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_3a8a48b1-d4a9-48fb-912e-2f793a6d8478/installer/0.log" Mar 09 16:36:44.059071 master-0 kubenswrapper[7604]: I0309 16:36:44.058963 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"3a8a48b1-d4a9-48fb-912e-2f793a6d8478","Type":"ContainerDied","Data":"ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15"} Mar 09 16:36:44.059071 master-0 kubenswrapper[7604]: I0309 16:36:44.059004 7604 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15" Mar 09 16:36:44.059071 master-0 kubenswrapper[7604]: I0309 16:36:44.059072 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 09 16:36:44.729101 master-0 kubenswrapper[7604]: I0309 16:36:44.729004 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:44.729101 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:44.729101 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:44.729101 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:44.729896 master-0 kubenswrapper[7604]: I0309 16:36:44.729121 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:45.728354 master-0 kubenswrapper[7604]: I0309 16:36:45.728216 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:45.728354 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:45.728354 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:45.728354 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:45.728354 master-0 kubenswrapper[7604]: I0309 16:36:45.728356 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:46.728784 master-0 kubenswrapper[7604]: I0309 16:36:46.728685 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:36:46.728784 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:36:46.728784 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:36:46.728784 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:36:46.729812 master-0 kubenswrapper[7604]: I0309 16:36:46.728825 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:36:46.867853 master-0 kubenswrapper[7604]: E0309 16:36:46.867766 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:36:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:36:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:36:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:36:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:36:47.728562 master-0 kubenswrapper[7604]: I0309 16:36:47.728483 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:47.728562 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:47.728562 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:47.728562 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:47.729293 master-0 kubenswrapper[7604]: I0309 16:36:47.728586 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:48.111525 master-0 kubenswrapper[7604]: I0309 16:36:48.111252 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"
Mar 09 16:36:48.111831 master-0 kubenswrapper[7604]: E0309 16:36:48.111573 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:36:48.729509 master-0 kubenswrapper[7604]: I0309 16:36:48.729379 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:48.729509 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:48.729509 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:48.729509 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:48.730467 master-0 kubenswrapper[7604]: I0309 16:36:48.729527 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:49.728821 master-0 kubenswrapper[7604]: I0309 16:36:49.728744 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:49.728821 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:49.728821 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:49.728821 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:49.729216 master-0 kubenswrapper[7604]: I0309 16:36:49.728826 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:49.877365 master-0 kubenswrapper[7604]: I0309 16:36:49.877253 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"
Mar 09 16:36:50.728990 master-0 kubenswrapper[7604]: I0309 16:36:50.728880 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:50.728990 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:50.728990 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:50.728990 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:50.729468 master-0 kubenswrapper[7604]: I0309 16:36:50.728985 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:51.117514 master-0 kubenswrapper[7604]: I0309 16:36:51.117438 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/config-sync-controllers/0.log"
Mar 09 16:36:51.118366 master-0 kubenswrapper[7604]: I0309 16:36:51.118283 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/cluster-cloud-controller-manager/0.log"
Mar 09 16:36:51.118366 master-0 kubenswrapper[7604]: I0309 16:36:51.118358 7604 generic.go:334] "Generic (PLEG): container finished" podID="ea34ff7e-27fa-4c26-acc0-ec551985eb76" containerID="39d1c81df8c0e375db5e92a2da393b888f722383ebb7782e3b3f53c06fee366b" exitCode=1
Mar 09 16:36:51.119238 master-0 kubenswrapper[7604]: I0309 16:36:51.119144 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerDied","Data":"39d1c81df8c0e375db5e92a2da393b888f722383ebb7782e3b3f53c06fee366b"}
Mar 09 16:36:51.119862 master-0 kubenswrapper[7604]: I0309 16:36:51.119823 7604 scope.go:117] "RemoveContainer" containerID="39d1c81df8c0e375db5e92a2da393b888f722383ebb7782e3b3f53c06fee366b"
Mar 09 16:36:51.728700 master-0 kubenswrapper[7604]: I0309 16:36:51.728600 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:51.728700 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:51.728700 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:51.728700 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:51.728700 master-0 kubenswrapper[7604]: I0309 16:36:51.728708 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:52.141547 master-0 kubenswrapper[7604]: I0309 16:36:52.141332 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/config-sync-controllers/0.log"
Mar 09 16:36:52.142284 master-0 kubenswrapper[7604]: I0309 16:36:52.141821 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/cluster-cloud-controller-manager/0.log"
Mar 09 16:36:52.142284 master-0 kubenswrapper[7604]: I0309 16:36:52.141879 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" event={"ID":"ea34ff7e-27fa-4c26-acc0-ec551985eb76","Type":"ContainerStarted","Data":"99dc90ecbf51079f7123f41a9b0597c4f3d0ecedb642bc0b50ae07bbe0f1a015"}
Mar 09 16:36:52.727491 master-0 kubenswrapper[7604]: I0309 16:36:52.727411 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:52.727491 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:52.727491 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:52.727491 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:52.727896 master-0 kubenswrapper[7604]: I0309 16:36:52.727506 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:53.609766 master-0 kubenswrapper[7604]: I0309 16:36:53.609719 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:36:53.729596 master-0 kubenswrapper[7604]: I0309 16:36:53.729408 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:53.729596 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:53.729596 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:53.729596 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:53.729596 master-0 kubenswrapper[7604]: I0309 16:36:53.729522 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:53.952027 master-0 kubenswrapper[7604]: E0309 16:36:53.951881 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 09 16:36:54.728516 master-0 kubenswrapper[7604]: I0309 16:36:54.728401 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:54.728516 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:54.728516 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:54.728516 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:54.729591 master-0 kubenswrapper[7604]: I0309 16:36:54.728533 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:55.732607 master-0 kubenswrapper[7604]: I0309 16:36:55.732469 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:55.732607 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:55.732607 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:55.732607 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:55.732607 master-0 kubenswrapper[7604]: I0309 16:36:55.732576 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:56.730259 master-0 kubenswrapper[7604]: I0309 16:36:56.730156 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:56.730259 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:56.730259 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:56.730259 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:56.730259 master-0 kubenswrapper[7604]: I0309 16:36:56.730249 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:56.869172 master-0 kubenswrapper[7604]: E0309 16:36:56.868920 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:36:56.875399 master-0 kubenswrapper[7604]: E0309 16:36:56.875209 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b397f0cf8519a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:34:56.141652378 +0000 UTC m=+553.195621821,LastTimestamp:2026-03-09 16:34:57.182485783 +0000 UTC m=+554.236455206,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:36:57.729803 master-0 kubenswrapper[7604]: I0309 16:36:57.729698 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:57.729803 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:57.729803 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:57.729803 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:57.730200 master-0 kubenswrapper[7604]: I0309 16:36:57.729831 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:58.728613 master-0 kubenswrapper[7604]: I0309 16:36:58.728497 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:58.728613 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:58.728613 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:58.728613 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:58.729353 master-0 kubenswrapper[7604]: I0309 16:36:58.728639 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:36:59.728840 master-0 kubenswrapper[7604]: I0309 16:36:59.728741 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:36:59.728840 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:36:59.728840 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:36:59.728840 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:36:59.728840 master-0 kubenswrapper[7604]: I0309 16:36:59.728842 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:37:00.728961 master-0 kubenswrapper[7604]: I0309 16:37:00.728877 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:37:00.728961 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:37:00.728961 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:37:00.728961 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:37:00.729906 master-0 kubenswrapper[7604]: I0309 16:37:00.729548 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:37:00.729906 master-0 kubenswrapper[7604]: I0309 16:37:00.729647 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:37:00.730510 master-0 kubenswrapper[7604]: I0309 16:37:00.730469 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a77339a51d7a4ed44c4a920634c7235e0fcfd324430f106c1b1bcdd4dc11bacc"} pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" containerMessage="Container router failed startup probe, will be restarted"
Mar 09 16:37:00.730589 master-0 kubenswrapper[7604]: I0309 16:37:00.730516 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" containerID="cri-o://a77339a51d7a4ed44c4a920634c7235e0fcfd324430f106c1b1bcdd4dc11bacc" gracePeriod=3600
Mar 09 16:37:00.900927 master-0 kubenswrapper[7604]: E0309 16:37:00.900838 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 09 16:37:01.203712 master-0 kubenswrapper[7604]: I0309 16:37:01.203388 7604 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="3c0f8c5cb67d0f971149cd26dcfae78f198a52498c78f29a7fa53e12c2f891cd" exitCode=0
Mar 09 16:37:01.203712 master-0 kubenswrapper[7604]: I0309 16:37:01.203495 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"3c0f8c5cb67d0f971149cd26dcfae78f198a52498c78f29a7fa53e12c2f891cd"}
Mar 09 16:37:01.204169 master-0 kubenswrapper[7604]: I0309 16:37:01.204123 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:37:01.204169 master-0 kubenswrapper[7604]: I0309 16:37:01.204157 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:37:02.110584 master-0 kubenswrapper[7604]: I0309 16:37:02.110519 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"
Mar 09 16:37:02.111272 master-0 kubenswrapper[7604]: E0309 16:37:02.110765 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:37:06.870079 master-0 kubenswrapper[7604]: E0309 16:37:06.869969 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 09 16:37:10.953919 master-0 kubenswrapper[7604]: E0309 16:37:10.953830 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 09 16:37:11.304877 master-0 kubenswrapper[7604]: I0309 16:37:11.304815 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/1.log"
Mar 09 16:37:11.305767 master-0 kubenswrapper[7604]: I0309 16:37:11.305705 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/0.log"
Mar 09 16:37:11.305878 master-0 kubenswrapper[7604]: I0309 16:37:11.305844 7604 generic.go:334] "Generic (PLEG): container finished" podID="57036838-9f42-4ea1-a5c9-77f820cc22c9" containerID="c81334b89261d35be2255091a39d304d36bd86f871bd8f896eb0a73bdb6d3990" exitCode=1
Mar 09 16:37:11.305931 master-0 kubenswrapper[7604]: I0309 16:37:11.305910 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerDied","Data":"c81334b89261d35be2255091a39d304d36bd86f871bd8f896eb0a73bdb6d3990"}
Mar 09 16:37:11.306004 master-0 kubenswrapper[7604]: I0309 16:37:11.305982 7604 scope.go:117] "RemoveContainer" containerID="ca16434e380b6db2be43284967084d34f8d84b54a570fafe10c2de9a729bf691"
Mar 09 16:37:11.306803 master-0 kubenswrapper[7604]: I0309 16:37:11.306748 7604 scope.go:117] "RemoveContainer" containerID="c81334b89261d35be2255091a39d304d36bd86f871bd8f896eb0a73bdb6d3990"
Mar 09 16:37:11.307099 master-0 kubenswrapper[7604]: E0309 16:37:11.307053 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-f594m_openshift-cluster-storage-operator(57036838-9f42-4ea1-a5c9-77f820cc22c9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podUID="57036838-9f42-4ea1-a5c9-77f820cc22c9"
Mar 09 16:37:12.315684 master-0 kubenswrapper[7604]: I0309 16:37:12.315613 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/1.log"
Mar 09 16:37:15.111568 master-0 kubenswrapper[7604]: I0309 16:37:15.111371 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"
Mar 09 16:37:15.112329 master-0 kubenswrapper[7604]: E0309 16:37:15.111853 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:37:15.339899 master-0 kubenswrapper[7604]: I0309 16:37:15.339784 7604 generic.go:334] "Generic (PLEG): container finished" podID="e4895f22-8fcd-4ace-96d8-bc2e18a67891" containerID="127fddf033d016698d708311f1ce4a751f3a2f860d40130a5519cb0b6938e0a1" exitCode=0
Mar 09 16:37:15.339899 master-0 kubenswrapper[7604]: I0309 16:37:15.339855 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" event={"ID":"e4895f22-8fcd-4ace-96d8-bc2e18a67891","Type":"ContainerDied","Data":"127fddf033d016698d708311f1ce4a751f3a2f860d40130a5519cb0b6938e0a1"}
Mar 09 16:37:15.340718 master-0 kubenswrapper[7604]: I0309 16:37:15.340684 7604 scope.go:117] "RemoveContainer" containerID="127fddf033d016698d708311f1ce4a751f3a2f860d40130a5519cb0b6938e0a1"
Mar 09 16:37:16.353926 master-0 kubenswrapper[7604]: I0309 16:37:16.353831 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" event={"ID":"e4895f22-8fcd-4ace-96d8-bc2e18a67891","Type":"ContainerStarted","Data":"beaf1d61e0dcc6cd6d782081dca5292b2910f6eb4dd3f7e21ed3192deea169ff"}
Mar 09 16:37:16.871341 master-0 kubenswrapper[7604]: E0309 16:37:16.871253 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:37:17.364471 master-0 kubenswrapper[7604]: I0309 16:37:17.364392 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-cvdzq_357570a4-f69b-4970-9b6f-fe06fc4c2f90/control-plane-machine-set-operator/0.log"
Mar 09 16:37:17.365279 master-0 kubenswrapper[7604]: I0309 16:37:17.364617 7604 generic.go:334] "Generic (PLEG): container finished" podID="357570a4-f69b-4970-9b6f-fe06fc4c2f90" containerID="da01301d90c8ec36dd26e650eefd6003d2c0b759242bb4c2d47a570d6b83fec7" exitCode=1
Mar 09 16:37:17.365279 master-0 kubenswrapper[7604]: I0309 16:37:17.364750 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" event={"ID":"357570a4-f69b-4970-9b6f-fe06fc4c2f90","Type":"ContainerDied","Data":"da01301d90c8ec36dd26e650eefd6003d2c0b759242bb4c2d47a570d6b83fec7"}
Mar 09 16:37:17.365690 master-0 kubenswrapper[7604]: I0309 16:37:17.365613 7604 scope.go:117] "RemoveContainer" containerID="da01301d90c8ec36dd26e650eefd6003d2c0b759242bb4c2d47a570d6b83fec7"
Mar 09 16:37:17.367044 master-0 kubenswrapper[7604]: I0309 16:37:17.366849 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-pfbvg_3ec3050d-8e6f-466a-995a-f78270408a85/machine-approver-controller/0.log"
Mar 09 16:37:17.367401 master-0 kubenswrapper[7604]: I0309 16:37:17.367352 7604 generic.go:334] "Generic (PLEG): container finished" podID="3ec3050d-8e6f-466a-995a-f78270408a85" containerID="2045c91c077228b5fc52cbacb88317be3538b9cb4ff34112c6659345b8d1fd77" exitCode=255
Mar 09 16:37:17.367547 master-0 kubenswrapper[7604]: I0309 16:37:17.367440 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" event={"ID":"3ec3050d-8e6f-466a-995a-f78270408a85","Type":"ContainerDied","Data":"2045c91c077228b5fc52cbacb88317be3538b9cb4ff34112c6659345b8d1fd77"}
Mar 09 16:37:17.368742 master-0 kubenswrapper[7604]: I0309 16:37:17.368704 7604 scope.go:117] "RemoveContainer" containerID="2045c91c077228b5fc52cbacb88317be3538b9cb4ff34112c6659345b8d1fd77"
Mar 09 16:37:17.369426 master-0 kubenswrapper[7604]: I0309 16:37:17.369391 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/0.log"
Mar 09 16:37:17.369508 master-0 kubenswrapper[7604]: I0309 16:37:17.369447 7604 generic.go:334] "Generic (PLEG): container finished" podID="fa7f88a3-9845-49a3-a108-d524df592961" containerID="5d27613e5c07fed41355caf36a7da682d5655bd692c9fefa2418bf264de4dc45" exitCode=1
Mar 09 16:37:17.370186 master-0 kubenswrapper[7604]: I0309 16:37:17.369512 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerDied","Data":"5d27613e5c07fed41355caf36a7da682d5655bd692c9fefa2418bf264de4dc45"}
Mar 09 16:37:17.370186 master-0 kubenswrapper[7604]: I0309 16:37:17.370069 7604 scope.go:117] "RemoveContainer" containerID="5d27613e5c07fed41355caf36a7da682d5655bd692c9fefa2418bf264de4dc45"
Mar 09 16:37:18.384797 master-0 kubenswrapper[7604]: I0309 16:37:18.384738 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-pfbvg_3ec3050d-8e6f-466a-995a-f78270408a85/machine-approver-controller/0.log"
Mar 09 16:37:18.385603 master-0 kubenswrapper[7604]: I0309 16:37:18.385479 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" event={"ID":"3ec3050d-8e6f-466a-995a-f78270408a85","Type":"ContainerStarted","Data":"ffb323a769c6d0a368b170af7d24202768f4c54baf6467c385d86fb0d37e29a2"}
Mar 09 16:37:18.389391 master-0 kubenswrapper[7604]: I0309 16:37:18.389299 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/0.log"
Mar 09 16:37:18.389759 master-0 kubenswrapper[7604]: I0309 16:37:18.389494 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerStarted","Data":"5e7be62db7c2ebff5b66de7a7333b7d5e3cfc65957eae64bbca9ae219287c419"}
Mar 09 16:37:18.391996 master-0 kubenswrapper[7604]: I0309 16:37:18.391961 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-cvdzq_357570a4-f69b-4970-9b6f-fe06fc4c2f90/control-plane-machine-set-operator/0.log"
Mar 09 16:37:18.392075 master-0 kubenswrapper[7604]: I0309 16:37:18.392014 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" event={"ID":"357570a4-f69b-4970-9b6f-fe06fc4c2f90","Type":"ContainerStarted","Data":"a3cb94f4048214397e58dfea57187737a8f9a282a3745e728d81b356320e9323"}
Mar 09 16:37:23.429522 master-0 kubenswrapper[7604]: I0309 16:37:23.429446 7604 generic.go:334] "Generic (PLEG): container finished" podID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerID="103d3eac07aecf0258cc2c832ca414dc5ada6722c47422884569884c3c3f57fc" exitCode=0
Mar 09 16:37:23.429522 master-0 kubenswrapper[7604]: I0309 16:37:23.429473 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" event={"ID":"7d1143ae-d94a-43f2-8e75-95aae13a5c57","Type":"ContainerDied","Data":"103d3eac07aecf0258cc2c832ca414dc5ada6722c47422884569884c3c3f57fc"}
Mar 09 16:37:23.430354 master-0 kubenswrapper[7604]: I0309 16:37:23.430086 7604 scope.go:117] "RemoveContainer" containerID="103d3eac07aecf0258cc2c832ca414dc5ada6722c47422884569884c3c3f57fc"
Mar 09 16:37:24.111374 master-0 kubenswrapper[7604]: I0309 16:37:24.111287 7604 scope.go:117] "RemoveContainer" containerID="c81334b89261d35be2255091a39d304d36bd86f871bd8f896eb0a73bdb6d3990"
Mar 09 16:37:24.439638 master-0 kubenswrapper[7604]: I0309 16:37:24.439483 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" event={"ID":"7d1143ae-d94a-43f2-8e75-95aae13a5c57","Type":"ContainerStarted","Data":"537fb6b643ee9cbd475ca32fcc8df6dda7f1359c900f2721924da0fedeca0866"}
Mar 09 16:37:24.440369 master-0 kubenswrapper[7604]: I0309 16:37:24.440002 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:37:24.442366 master-0 kubenswrapper[7604]: I0309 16:37:24.442319 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/1.log"
Mar 09 16:37:24.442536 master-0 kubenswrapper[7604]: I0309 16:37:24.442402 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerStarted","Data":"4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664"}
Mar 09 16:37:24.446251 master-0 kubenswrapper[7604]: I0309 16:37:24.446203 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:37:26.872364 master-0 kubenswrapper[7604]: E0309 16:37:26.872244 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:37:26.872364 master-0 kubenswrapper[7604]: E0309 16:37:26.872325 7604 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 09 16:37:27.955923 master-0 kubenswrapper[7604]: E0309 16:37:27.955624 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 09 16:37:28.111491 master-0 kubenswrapper[7604]: I0309 16:37:28.111364 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"
Mar 09 16:37:28.111954 master-0 kubenswrapper[7604]: E0309 16:37:28.111790 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:37:30.889480 master-0 kubenswrapper[7604]: E0309 16:37:30.888280 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{router-default-79f8cd6fdd-rvnwf.189b3959439c8b29 openshift-ingress 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-79f8cd6fdd-rvnwf,UID:73f1f0ba-f90e-45aa-b1ba-df011a5b9d56,APIVersion:v1,ResourceVersion:10598,FieldPath:spec.containers{router},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:32:13.849627433 +0000 UTC m=+390.903596856,LastTimestamp:2026-03-09 16:35:00.856611964 +0000 UTC m=+557.910581387,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:37:35.208745 master-0 kubenswrapper[7604]: E0309 16:37:35.208620 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 09 16:37:36.533266 master-0 kubenswrapper[7604]: I0309 16:37:36.533190 7604 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="9fcbc01d5f4782d9a43018a868b466e4526448de43e3cfccab2380f32946c687" exitCode=0
Mar 09 16:37:36.533266 master-0 kubenswrapper[7604]: I0309 16:37:36.533234 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"9fcbc01d5f4782d9a43018a868b466e4526448de43e3cfccab2380f32946c687"}
Mar 09 16:37:36.534221 master-0 kubenswrapper[7604]: I0309 16:37:36.533716 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:37:36.534221 master-0 kubenswrapper[7604]: I0309 16:37:36.533735 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:37:41.019375 master-0 kubenswrapper[7604]: I0309 16:37:41.019267 7604 status_manager.go:851] "Failed to get status for pod" podUID="ea34ff7e-27fa-4c26-acc0-ec551985eb76" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-cloud-controller-manager-operator-7c8df9b496-zctw6)"
Mar 09 16:37:41.111677 master-0 kubenswrapper[7604]: I0309 16:37:41.111567 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"
Mar 09 16:37:41.112141 master-0 kubenswrapper[7604]: E0309 16:37:41.111940 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:37:44.608685 master-0 kubenswrapper[7604]: I0309 16:37:44.608620 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/3.log"
Mar 09 16:37:44.609610 master-0 kubenswrapper[7604]: I0309 16:37:44.609563 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/2.log"
Mar 09 16:37:44.610186 master-0 kubenswrapper[7604]: I0309 16:37:44.610132 7604 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0" exitCode=1
Mar 09 16:37:44.610241 master-0 kubenswrapper[7604]: I0309 16:37:44.610196 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerDied","Data":"ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0"}
Mar 09 16:37:44.610284 master-0 kubenswrapper[7604]: I0309 16:37:44.610258 7604 scope.go:117] "RemoveContainer" containerID="6b7e72ca08afdaca41526ec0161f7e73b9b6537e1fe65d53be23a9d92e58aa44"
Mar 09 16:37:44.610853 master-0 kubenswrapper[7604]: I0309 16:37:44.610832 7604 scope.go:117] "RemoveContainer" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0"
Mar 09 16:37:44.611217 master-0 kubenswrapper[7604]: E0309 16:37:44.611194 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator
pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:37:44.956918 master-0 kubenswrapper[7604]: E0309 16:37:44.956791 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 09 16:37:45.618874 master-0 kubenswrapper[7604]: I0309 16:37:45.618807 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/3.log" Mar 09 16:37:47.641479 master-0 kubenswrapper[7604]: I0309 16:37:47.641374 7604 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="a77339a51d7a4ed44c4a920634c7235e0fcfd324430f106c1b1bcdd4dc11bacc" exitCode=0 Mar 09 16:37:47.641479 master-0 kubenswrapper[7604]: I0309 16:37:47.641445 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerDied","Data":"a77339a51d7a4ed44c4a920634c7235e0fcfd324430f106c1b1bcdd4dc11bacc"} Mar 09 16:37:47.642506 master-0 kubenswrapper[7604]: I0309 16:37:47.641541 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"4c1869d3a7ddcc58f5543caea28428d5b999484c891498c9388eaad6a5d85b10"} Mar 09 16:37:47.642506 master-0 kubenswrapper[7604]: I0309 16:37:47.641571 7604 scope.go:117] "RemoveContainer" containerID="116e02ef02114f2030248577cde62b42e1c5eea50c09ca56d92d93834a526424" Mar 09 
16:37:47.726245 master-0 kubenswrapper[7604]: I0309 16:37:47.726158 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:37:47.728949 master-0 kubenswrapper[7604]: I0309 16:37:47.728896 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:47.728949 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:47.728949 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:47.728949 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:47.729209 master-0 kubenswrapper[7604]: I0309 16:37:47.729008 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:48.725719 master-0 kubenswrapper[7604]: I0309 16:37:48.725590 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:37:48.728546 master-0 kubenswrapper[7604]: I0309 16:37:48.728503 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:48.728546 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:48.728546 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:48.728546 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:48.728689 master-0 kubenswrapper[7604]: I0309 16:37:48.728587 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:49.727741 master-0 kubenswrapper[7604]: I0309 16:37:49.727673 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:49.727741 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:49.727741 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:49.727741 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:49.727741 master-0 kubenswrapper[7604]: I0309 16:37:49.727738 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:50.728970 master-0 kubenswrapper[7604]: I0309 16:37:50.728864 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:50.728970 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:50.728970 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:50.728970 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:50.729851 master-0 kubenswrapper[7604]: I0309 16:37:50.728989 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:51.729998 
master-0 kubenswrapper[7604]: I0309 16:37:51.729894 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:51.729998 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:51.729998 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:51.729998 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:51.729998 master-0 kubenswrapper[7604]: I0309 16:37:51.729996 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:52.728635 master-0 kubenswrapper[7604]: I0309 16:37:52.728559 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:52.728635 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:52.728635 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:52.728635 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:52.728977 master-0 kubenswrapper[7604]: I0309 16:37:52.728657 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:53.111157 master-0 kubenswrapper[7604]: I0309 16:37:53.111012 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6" Mar 09 16:37:53.692599 master-0 
kubenswrapper[7604]: I0309 16:37:53.692546 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"} Mar 09 16:37:53.728870 master-0 kubenswrapper[7604]: I0309 16:37:53.728743 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:53.728870 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:53.728870 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:53.728870 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:53.728870 master-0 kubenswrapper[7604]: I0309 16:37:53.728807 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:54.701835 master-0 kubenswrapper[7604]: I0309 16:37:54.701770 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/2.log" Mar 09 16:37:54.702381 master-0 kubenswrapper[7604]: I0309 16:37:54.702315 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/1.log" Mar 09 16:37:54.702381 master-0 kubenswrapper[7604]: I0309 16:37:54.702349 7604 generic.go:334] "Generic (PLEG): container finished" podID="57036838-9f42-4ea1-a5c9-77f820cc22c9" 
containerID="4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664" exitCode=1 Mar 09 16:37:54.702480 master-0 kubenswrapper[7604]: I0309 16:37:54.702380 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerDied","Data":"4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664"} Mar 09 16:37:54.702480 master-0 kubenswrapper[7604]: I0309 16:37:54.702413 7604 scope.go:117] "RemoveContainer" containerID="c81334b89261d35be2255091a39d304d36bd86f871bd8f896eb0a73bdb6d3990" Mar 09 16:37:54.703306 master-0 kubenswrapper[7604]: I0309 16:37:54.703129 7604 scope.go:117] "RemoveContainer" containerID="4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664" Mar 09 16:37:54.703596 master-0 kubenswrapper[7604]: E0309 16:37:54.703566 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-f594m_openshift-cluster-storage-operator(57036838-9f42-4ea1-a5c9-77f820cc22c9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podUID="57036838-9f42-4ea1-a5c9-77f820cc22c9" Mar 09 16:37:54.728825 master-0 kubenswrapper[7604]: I0309 16:37:54.728750 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:54.728825 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:54.728825 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:54.728825 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:54.728825 master-0 kubenswrapper[7604]: I0309 16:37:54.728818 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:55.056981 master-0 kubenswrapper[7604]: I0309 16:37:55.056823 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:37:55.712320 master-0 kubenswrapper[7604]: I0309 16:37:55.712269 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/2.log" Mar 09 16:37:55.730357 master-0 kubenswrapper[7604]: I0309 16:37:55.730315 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:55.730357 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:55.730357 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:55.730357 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:55.730795 master-0 kubenswrapper[7604]: I0309 16:37:55.730769 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:56.728246 master-0 kubenswrapper[7604]: I0309 16:37:56.728134 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:56.728246 master-0 
kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:56.728246 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:56.728246 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:56.728904 master-0 kubenswrapper[7604]: I0309 16:37:56.728876 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:57.110665 master-0 kubenswrapper[7604]: I0309 16:37:57.110554 7604 scope.go:117] "RemoveContainer" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0" Mar 09 16:37:57.111258 master-0 kubenswrapper[7604]: E0309 16:37:57.111234 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:37:57.180930 master-0 kubenswrapper[7604]: I0309 16:37:57.180852 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:37:57.728170 master-0 kubenswrapper[7604]: I0309 16:37:57.728108 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:57.728170 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:57.728170 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:57.728170 master-0 kubenswrapper[7604]: healthz check failed Mar 09 
16:37:57.728757 master-0 kubenswrapper[7604]: I0309 16:37:57.728195 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:58.057801 master-0 kubenswrapper[7604]: I0309 16:37:58.057539 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:37:58.727786 master-0 kubenswrapper[7604]: I0309 16:37:58.727678 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:37:58.727786 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:58.727786 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:58.727786 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:58.727786 master-0 kubenswrapper[7604]: I0309 16:37:58.727773 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:37:59.729853 master-0 kubenswrapper[7604]: I0309 16:37:59.729748 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:37:59.729853 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:37:59.729853 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:37:59.729853 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:37:59.729853 master-0 kubenswrapper[7604]: I0309 16:37:59.729850 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:00.728790 master-0 kubenswrapper[7604]: I0309 16:38:00.728707 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:00.728790 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:00.728790 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:00.728790 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:00.729172 master-0 kubenswrapper[7604]: I0309 16:38:00.728803 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:01.728390 master-0 kubenswrapper[7604]: I0309 16:38:01.728285 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:01.728390 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:01.728390 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:01.728390 master-0 kubenswrapper[7604]: healthz 
check failed Mar 09 16:38:01.728390 master-0 kubenswrapper[7604]: I0309 16:38:01.728391 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:01.958632 master-0 kubenswrapper[7604]: E0309 16:38:01.958510 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 09 16:38:02.728983 master-0 kubenswrapper[7604]: I0309 16:38:02.728911 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:02.728983 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:02.728983 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:02.728983 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:02.729686 master-0 kubenswrapper[7604]: I0309 16:38:02.729006 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:03.729616 master-0 kubenswrapper[7604]: I0309 16:38:03.729544 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:03.729616 master-0 kubenswrapper[7604]: [-]has-synced failed: 
reason withheld Mar 09 16:38:03.729616 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:03.729616 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:03.730535 master-0 kubenswrapper[7604]: I0309 16:38:03.729654 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:04.728387 master-0 kubenswrapper[7604]: I0309 16:38:04.728278 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:04.728387 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:04.728387 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:04.728387 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:04.728387 master-0 kubenswrapper[7604]: I0309 16:38:04.728379 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:04.893387 master-0 kubenswrapper[7604]: E0309 16:38:04.893189 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b397f0cf8519a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:34:56.141652378 +0000 UTC m=+553.195621821,LastTimestamp:2026-03-09 16:35:02.915290719 +0000 UTC m=+559.969260142,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:38:05.728954 master-0 kubenswrapper[7604]: I0309 16:38:05.728899 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:05.728954 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:05.728954 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:05.728954 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:05.729271 master-0 kubenswrapper[7604]: I0309 16:38:05.728979 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:06.729342 master-0 kubenswrapper[7604]: I0309 16:38:06.729260 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:06.729342 
master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:06.729342 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:06.729342 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:06.730028 master-0 kubenswrapper[7604]: I0309 16:38:06.729377 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:07.727737 master-0 kubenswrapper[7604]: I0309 16:38:07.727662 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:07.727737 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:07.727737 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:07.727737 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:07.728737 master-0 kubenswrapper[7604]: I0309 16:38:07.727759 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:08.058355 master-0 kubenswrapper[7604]: I0309 16:38:08.058082 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:38:08.728579 master-0 kubenswrapper[7604]: I0309 16:38:08.728503 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:08.728579 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:08.728579 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:08.728579 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:08.729013 master-0 kubenswrapper[7604]: I0309 16:38:08.728985 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:09.111932 master-0 kubenswrapper[7604]: I0309 16:38:09.111708 7604 scope.go:117] "RemoveContainer" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0"
Mar 09 16:38:09.111932 master-0 kubenswrapper[7604]: I0309 16:38:09.111876 7604 scope.go:117] "RemoveContainer" containerID="4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664"
Mar 09 16:38:09.112882 master-0 kubenswrapper[7604]: E0309 16:38:09.112057 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd"
Mar 09 16:38:09.112882 master-0 kubenswrapper[7604]: E0309 16:38:09.112782 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-f594m_openshift-cluster-storage-operator(57036838-9f42-4ea1-a5c9-77f820cc22c9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podUID="57036838-9f42-4ea1-a5c9-77f820cc22c9"
Mar 09 16:38:09.728577 master-0 kubenswrapper[7604]: I0309 16:38:09.728472 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:09.728577 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:09.728577 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:09.728577 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:09.729166 master-0 kubenswrapper[7604]: I0309 16:38:09.728587 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:10.537596 master-0 kubenswrapper[7604]: E0309 16:38:10.537497 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 09 16:38:10.728717 master-0 kubenswrapper[7604]: I0309 16:38:10.728556 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:10.728717 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:10.728717 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:10.728717 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:10.728717 master-0 kubenswrapper[7604]: I0309 16:38:10.728634 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:10.817869 master-0 kubenswrapper[7604]: I0309 16:38:10.817803 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6eb340f9829999c7cc79c3d03f217ced767a38d4b0f77e9249276c39cb95fddd"}
Mar 09 16:38:11.729158 master-0 kubenswrapper[7604]: I0309 16:38:11.729119 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:11.729158 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:11.729158 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:11.729158 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:11.729741 master-0 kubenswrapper[7604]: I0309 16:38:11.729173 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:11.830955 master-0 kubenswrapper[7604]: I0309 16:38:11.830771 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d0837c89dd7d5c29cb3a16a4172f82ba252bd96283dd17c4859c983ffbc4a953"}
Mar 09 16:38:11.830955 master-0 kubenswrapper[7604]: I0309 16:38:11.830827 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"a81141219f32b726e278b6a94c2bf45a46404948e70612df477a68ae817250cb"}
Mar 09 16:38:11.830955 master-0 kubenswrapper[7604]: I0309 16:38:11.830837 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"9b13491263a5d4609f4ed6efa05d90c0afd38b93af0c6748cf255f4f0ae9a67f"}
Mar 09 16:38:11.830955 master-0 kubenswrapper[7604]: I0309 16:38:11.830848 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"add3696dadb79923d056772ab2d07a81596271dc33777dc0c6ae81fec3a9d5b4"}
Mar 09 16:38:11.831332 master-0 kubenswrapper[7604]: I0309 16:38:11.831109 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:38:11.831332 master-0 kubenswrapper[7604]: I0309 16:38:11.831125 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:38:12.138640 master-0 kubenswrapper[7604]: I0309 16:38:12.138488 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 09 16:38:12.138640 master-0 kubenswrapper[7604]: I0309 16:38:12.138559 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 09 16:38:12.728747 master-0 kubenswrapper[7604]: I0309 16:38:12.728595 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:12.728747 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:12.728747 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:12.728747 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:12.728747 master-0 kubenswrapper[7604]: I0309 16:38:12.728668 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:13.728571 master-0 kubenswrapper[7604]: I0309 16:38:13.728350 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:13.728571 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:13.728571 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:13.728571 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:13.728571 master-0 kubenswrapper[7604]: I0309 16:38:13.728449 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:14.727737 master-0 kubenswrapper[7604]: I0309 16:38:14.727657 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:14.727737 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:14.727737 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:14.727737 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:14.728370 master-0 kubenswrapper[7604]: I0309 16:38:14.727738 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:15.728791 master-0 kubenswrapper[7604]: I0309 16:38:15.728733 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:15.728791 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:15.728791 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:15.728791 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:15.729499 master-0 kubenswrapper[7604]: I0309 16:38:15.728809 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:16.727924 master-0 kubenswrapper[7604]: I0309 16:38:16.727856 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:16.727924 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:16.727924 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:16.727924 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:16.728393 master-0 kubenswrapper[7604]: I0309 16:38:16.727944 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:17.728835 master-0 kubenswrapper[7604]: I0309 16:38:17.728728 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:17.728835 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:17.728835 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:17.728835 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:17.728835 master-0 kubenswrapper[7604]: I0309 16:38:17.728828 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:17.870983 master-0 kubenswrapper[7604]: I0309 16:38:17.870898 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/1.log"
Mar 09 16:38:17.872075 master-0 kubenswrapper[7604]: I0309 16:38:17.872023 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/0.log"
Mar 09 16:38:17.872345 master-0 kubenswrapper[7604]: I0309 16:38:17.872143 7604 generic.go:334] "Generic (PLEG): container finished" podID="fa7f88a3-9845-49a3-a108-d524df592961" containerID="5e7be62db7c2ebff5b66de7a7333b7d5e3cfc65957eae64bbca9ae219287c419" exitCode=1
Mar 09 16:38:17.872345 master-0 kubenswrapper[7604]: I0309 16:38:17.872184 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerDied","Data":"5e7be62db7c2ebff5b66de7a7333b7d5e3cfc65957eae64bbca9ae219287c419"}
Mar 09 16:38:17.872345 master-0 kubenswrapper[7604]: I0309 16:38:17.872226 7604 scope.go:117] "RemoveContainer" containerID="5d27613e5c07fed41355caf36a7da682d5655bd692c9fefa2418bf264de4dc45"
Mar 09 16:38:17.874744 master-0 kubenswrapper[7604]: I0309 16:38:17.874694 7604 scope.go:117] "RemoveContainer" containerID="5e7be62db7c2ebff5b66de7a7333b7d5e3cfc65957eae64bbca9ae219287c419"
Mar 09 16:38:17.875020 master-0 kubenswrapper[7604]: E0309 16:38:17.874987 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-p27tf_openshift-machine-api(fa7f88a3-9845-49a3-a108-d524df592961)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" podUID="fa7f88a3-9845-49a3-a108-d524df592961"
Mar 09 16:38:18.057584 master-0 kubenswrapper[7604]: I0309 16:38:18.057478 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:38:18.057584 master-0 kubenswrapper[7604]: I0309 16:38:18.057586 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:38:18.058177 master-0 kubenswrapper[7604]: I0309 16:38:18.058133 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 09 16:38:18.058268 master-0 kubenswrapper[7604]: I0309 16:38:18.058197 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" gracePeriod=30
Mar 09 16:38:18.209718 master-0 kubenswrapper[7604]: E0309 16:38:18.209673 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:38:18.729321 master-0 kubenswrapper[7604]: I0309 16:38:18.729196 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:18.729321 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:18.729321 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:18.729321 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:18.730062 master-0 kubenswrapper[7604]: I0309 16:38:18.729363 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:18.888682 master-0 kubenswrapper[7604]: I0309 16:38:18.888565 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" exitCode=2
Mar 09 16:38:18.888682 master-0 kubenswrapper[7604]: I0309 16:38:18.888661 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"}
Mar 09 16:38:18.889134 master-0 kubenswrapper[7604]: I0309 16:38:18.888743 7604 scope.go:117] "RemoveContainer" containerID="32530f01bacf8fc9fd7c10e7dc4c11df096c5de86ec99a00464c346e8d519de6"
Mar 09 16:38:18.889913 master-0 kubenswrapper[7604]: I0309 16:38:18.889843 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:38:18.890272 master-0 kubenswrapper[7604]: E0309 16:38:18.890226 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:38:18.891685 master-0 kubenswrapper[7604]: I0309 16:38:18.891643 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/1.log"
Mar 09 16:38:18.960650 master-0 kubenswrapper[7604]: E0309 16:38:18.960567 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 09 16:38:19.727845 master-0 kubenswrapper[7604]: I0309 16:38:19.727755 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:19.727845 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:19.727845 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:19.727845 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:19.727845 master-0 kubenswrapper[7604]: I0309 16:38:19.727851 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:20.729323 master-0 kubenswrapper[7604]: I0309 16:38:20.729233 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:20.729323 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:20.729323 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:20.729323 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:20.730215 master-0 kubenswrapper[7604]: I0309 16:38:20.729335 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:21.729531 master-0 kubenswrapper[7604]: I0309 16:38:21.729403 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:21.729531 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:21.729531 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:21.729531 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:21.730173 master-0 kubenswrapper[7604]: I0309 16:38:21.729601 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:22.171565 master-0 kubenswrapper[7604]: I0309 16:38:22.171323 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 09 16:38:22.729318 master-0 kubenswrapper[7604]: I0309 16:38:22.729215 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:22.729318 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:22.729318 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:22.729318 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:22.729318 master-0 kubenswrapper[7604]: I0309 16:38:22.729282 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:22.913742 master-0 kubenswrapper[7604]: I0309 16:38:22.913676 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:38:22.914347 master-0 kubenswrapper[7604]: I0309 16:38:22.914319 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:38:22.914774 master-0 kubenswrapper[7604]: E0309 16:38:22.914731 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:38:23.111498 master-0 kubenswrapper[7604]: I0309 16:38:23.111335 7604 scope.go:117] "RemoveContainer" containerID="4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664"
Mar 09 16:38:23.111498 master-0 kubenswrapper[7604]: I0309 16:38:23.111464 7604 scope.go:117] "RemoveContainer" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0"
Mar 09 16:38:23.111809 master-0 kubenswrapper[7604]: E0309 16:38:23.111774 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd"
Mar 09 16:38:23.728821 master-0 kubenswrapper[7604]: I0309 16:38:23.728714 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:23.728821 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:23.728821 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:23.728821 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:23.728821 master-0 kubenswrapper[7604]: I0309 16:38:23.728829 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:23.931631 master-0 kubenswrapper[7604]: I0309 16:38:23.931571 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/2.log"
Mar 09 16:38:23.932093 master-0 kubenswrapper[7604]: I0309 16:38:23.931648 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerStarted","Data":"76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd"}
Mar 09 16:38:24.729291 master-0 kubenswrapper[7604]: I0309 16:38:24.729118 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:24.729291 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:24.729291 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:24.729291 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:24.729291 master-0 kubenswrapper[7604]: I0309 16:38:24.729266 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:25.727697 master-0 kubenswrapper[7604]: I0309 16:38:25.727613 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:25.727697 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:25.727697 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:25.727697 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:25.727697 master-0 kubenswrapper[7604]: I0309 16:38:25.727674 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:26.729400 master-0 kubenswrapper[7604]: I0309 16:38:26.729314 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:26.729400 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:26.729400 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:26.729400 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:26.730199 master-0 kubenswrapper[7604]: I0309 16:38:26.729443 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:27.153908 master-0 kubenswrapper[7604]: I0309 16:38:27.153749 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 09 16:38:27.728754 master-0 kubenswrapper[7604]: I0309 16:38:27.728659 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:27.728754 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:27.728754 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:27.728754 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:27.728754 master-0 kubenswrapper[7604]: I0309 16:38:27.728747 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:28.729021 master-0 kubenswrapper[7604]: I0309 16:38:28.728746 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:28.729021 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:28.729021 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:28.729021 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:28.729021 master-0 kubenswrapper[7604]: I0309 16:38:28.728848 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:29.728985 master-0 kubenswrapper[7604]: I0309 16:38:29.728896 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:29.728985 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:29.728985 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:29.728985 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:29.730178 master-0 kubenswrapper[7604]: I0309 16:38:29.729000 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:30.728750 master-0 kubenswrapper[7604]: I0309 16:38:30.728641 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:30.728750 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:30.728750 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:30.728750 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:30.729337 master-0 kubenswrapper[7604]: I0309 16:38:30.728774 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:31.111961 master-0 kubenswrapper[7604]: I0309 16:38:31.111777 7604 scope.go:117] "RemoveContainer" containerID="5e7be62db7c2ebff5b66de7a7333b7d5e3cfc65957eae64bbca9ae219287c419"
Mar 09 16:38:31.727984 master-0 kubenswrapper[7604]: I0309 16:38:31.727733 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:31.727984 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:31.727984 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:31.727984 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:31.727984 master-0 kubenswrapper[7604]: I0309 16:38:31.727818 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:31.999413 master-0 kubenswrapper[7604]: I0309 16:38:31.999245 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/1.log"
Mar 09 16:38:32.000029 master-0 kubenswrapper[7604]: I0309 16:38:31.999947 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" event={"ID":"fa7f88a3-9845-49a3-a108-d524df592961","Type":"ContainerStarted","Data":"640a2ad552347d07187eb373451d82c6a7a6c63c8a85df43da20084dc26aad24"}
Mar 09 16:38:32.729008 master-0 kubenswrapper[7604]: I0309 16:38:32.728917 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:32.729008 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:32.729008 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:32.729008 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:32.729860 master-0 kubenswrapper[7604]: I0309 16:38:32.729040 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:33.730161 master-0 kubenswrapper[7604]: I0309 16:38:33.730037 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:33.730161 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:33.730161 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:33.730161 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:33.730973 master-0 kubenswrapper[7604]: I0309 16:38:33.730231 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:34.728757 master-0 kubenswrapper[7604]: I0309 16:38:34.728667 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:34.728757 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:34.728757 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:34.728757 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:34.729099 master-0 kubenswrapper[7604]: I0309 16:38:34.728793 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:35.729385 master-0 kubenswrapper[7604]: I0309 16:38:35.729301 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:35.729385 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:35.729385 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:35.729385 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:35.730391 master-0 kubenswrapper[7604]: I0309 16:38:35.729384 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:35.962154 master-0 kubenswrapper[7604]: E0309 16:38:35.961988 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 09 16:38:36.729855 master-0 kubenswrapper[7604]: I0309 16:38:36.729777 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:36.729855 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:36.729855 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:36.729855 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:36.731138 master-0
kubenswrapper[7604]: I0309 16:38:36.729873 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:37.727802 master-0 kubenswrapper[7604]: I0309 16:38:37.727699 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:37.727802 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:37.727802 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:37.727802 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:37.728220 master-0 kubenswrapper[7604]: I0309 16:38:37.728194 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:38.111102 master-0 kubenswrapper[7604]: I0309 16:38:38.110954 7604 scope.go:117] "RemoveContainer" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0" Mar 09 16:38:38.112099 master-0 kubenswrapper[7604]: I0309 16:38:38.111349 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:38:38.113025 master-0 kubenswrapper[7604]: E0309 16:38:38.112121 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:38:38.727623 master-0 kubenswrapper[7604]: I0309 16:38:38.727559 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:38.727623 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:38.727623 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:38.727623 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:38.727922 master-0 kubenswrapper[7604]: I0309 16:38:38.727633 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:38.896766 master-0 kubenswrapper[7604]: E0309 16:38:38.896632 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189b397f0cf8519a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:34:56.141652378 +0000 UTC m=+553.195621821,LastTimestamp:2026-03-09 16:35:05.058677758 +0000 UTC 
m=+562.112647181,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:38:39.044401 master-0 kubenswrapper[7604]: I0309 16:38:39.044294 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/3.log" Mar 09 16:38:39.044916 master-0 kubenswrapper[7604]: I0309 16:38:39.044872 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0"} Mar 09 16:38:39.728932 master-0 kubenswrapper[7604]: I0309 16:38:39.728873 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:39.728932 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:39.728932 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:39.728932 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:39.729640 master-0 kubenswrapper[7604]: I0309 16:38:39.728944 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:40.728305 master-0 kubenswrapper[7604]: I0309 16:38:40.728137 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:38:40.728305 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:40.728305 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:40.728305 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:40.728305 master-0 kubenswrapper[7604]: I0309 16:38:40.728302 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:41.020552 master-0 kubenswrapper[7604]: I0309 16:38:41.020376 7604 status_manager.go:851] "Failed to get status for pod" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 09 16:38:41.728664 master-0 kubenswrapper[7604]: I0309 16:38:41.728595 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:41.728664 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:41.728664 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:41.728664 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:41.728946 master-0 kubenswrapper[7604]: I0309 16:38:41.728676 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:42.728186 master-0 kubenswrapper[7604]: I0309 16:38:42.728116 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:42.728186 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:42.728186 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:42.728186 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:42.729151 master-0 kubenswrapper[7604]: I0309 16:38:42.728215 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:43.728033 master-0 kubenswrapper[7604]: I0309 16:38:43.727965 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:43.728033 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:43.728033 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:43.728033 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:43.728033 master-0 kubenswrapper[7604]: I0309 16:38:43.728024 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:44.728761 master-0 kubenswrapper[7604]: I0309 16:38:44.728661 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:44.728761 master-0 kubenswrapper[7604]: 
[-]has-synced failed: reason withheld Mar 09 16:38:44.728761 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:44.728761 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:44.729836 master-0 kubenswrapper[7604]: I0309 16:38:44.728804 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:45.728538 master-0 kubenswrapper[7604]: I0309 16:38:45.728486 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:45.728538 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:45.728538 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:45.728538 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:45.729192 master-0 kubenswrapper[7604]: I0309 16:38:45.728556 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:45.833335 master-0 kubenswrapper[7604]: E0309 16:38:45.833268 7604 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 09 16:38:46.100120 master-0 kubenswrapper[7604]: I0309 16:38:46.099972 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d" Mar 09 16:38:46.100120 master-0 kubenswrapper[7604]: I0309 16:38:46.100019 7604 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d" Mar 09 16:38:46.728308 master-0 kubenswrapper[7604]: I0309 16:38:46.728230 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:46.728308 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:46.728308 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:46.728308 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:46.728701 master-0 kubenswrapper[7604]: I0309 16:38:46.728309 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:47.728150 master-0 kubenswrapper[7604]: I0309 16:38:47.728110 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:47.728150 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:47.728150 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:47.728150 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:47.728959 master-0 kubenswrapper[7604]: I0309 16:38:47.728926 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:48.727917 master-0 kubenswrapper[7604]: I0309 16:38:48.727851 7604 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:48.727917 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:48.727917 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:48.727917 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:48.728976 master-0 kubenswrapper[7604]: I0309 16:38:48.727920 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:49.111286 master-0 kubenswrapper[7604]: I0309 16:38:49.110974 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:38:49.111286 master-0 kubenswrapper[7604]: E0309 16:38:49.111257 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:38:49.728742 master-0 kubenswrapper[7604]: I0309 16:38:49.728655 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:49.728742 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:49.728742 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:49.728742 master-0 
kubenswrapper[7604]: healthz check failed Mar 09 16:38:49.728742 master-0 kubenswrapper[7604]: I0309 16:38:49.728733 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:50.728925 master-0 kubenswrapper[7604]: I0309 16:38:50.728852 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:50.728925 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:50.728925 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:50.728925 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:50.729658 master-0 kubenswrapper[7604]: I0309 16:38:50.728931 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:51.728271 master-0 kubenswrapper[7604]: I0309 16:38:51.728187 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:51.728271 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:51.728271 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:51.728271 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:51.728610 master-0 kubenswrapper[7604]: I0309 16:38:51.728303 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:52.731615 master-0 kubenswrapper[7604]: I0309 16:38:52.731551 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:52.731615 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:52.731615 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:52.731615 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:52.732338 master-0 kubenswrapper[7604]: I0309 16:38:52.731640 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:52.964272 master-0 kubenswrapper[7604]: E0309 16:38:52.964122 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 09 16:38:53.729526 master-0 kubenswrapper[7604]: I0309 16:38:53.729403 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:53.729526 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:53.729526 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:53.729526 master-0 kubenswrapper[7604]: 
healthz check failed Mar 09 16:38:53.729887 master-0 kubenswrapper[7604]: I0309 16:38:53.729534 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:54.150286 master-0 kubenswrapper[7604]: I0309 16:38:54.150078 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/3.log" Mar 09 16:38:54.151410 master-0 kubenswrapper[7604]: I0309 16:38:54.150622 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/2.log" Mar 09 16:38:54.151410 master-0 kubenswrapper[7604]: I0309 16:38:54.150670 7604 generic.go:334] "Generic (PLEG): container finished" podID="57036838-9f42-4ea1-a5c9-77f820cc22c9" containerID="76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd" exitCode=1 Mar 09 16:38:54.151410 master-0 kubenswrapper[7604]: I0309 16:38:54.150708 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerDied","Data":"76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd"} Mar 09 16:38:54.151410 master-0 kubenswrapper[7604]: I0309 16:38:54.150760 7604 scope.go:117] "RemoveContainer" containerID="4325b5dfa4521d3e77a3efbefa475bf4912314f21c97e00b6f38df68b30ac664" Mar 09 16:38:54.151601 master-0 kubenswrapper[7604]: I0309 16:38:54.151585 7604 scope.go:117] "RemoveContainer" containerID="76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd" Mar 09 16:38:54.151895 master-0 kubenswrapper[7604]: E0309 16:38:54.151851 7604 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-f594m_openshift-cluster-storage-operator(57036838-9f42-4ea1-a5c9-77f820cc22c9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podUID="57036838-9f42-4ea1-a5c9-77f820cc22c9" Mar 09 16:38:54.728242 master-0 kubenswrapper[7604]: I0309 16:38:54.728170 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:54.728242 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:54.728242 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:54.728242 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:54.728574 master-0 kubenswrapper[7604]: I0309 16:38:54.728251 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:55.162223 master-0 kubenswrapper[7604]: I0309 16:38:55.162022 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/3.log" Mar 09 16:38:55.729249 master-0 kubenswrapper[7604]: I0309 16:38:55.729158 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:55.729249 master-0 kubenswrapper[7604]: 
[-]has-synced failed: reason withheld Mar 09 16:38:55.729249 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:55.729249 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:55.729249 master-0 kubenswrapper[7604]: I0309 16:38:55.729248 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:56.728660 master-0 kubenswrapper[7604]: I0309 16:38:56.728560 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:56.728660 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:56.728660 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:56.728660 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:56.729531 master-0 kubenswrapper[7604]: I0309 16:38:56.728671 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:38:57.728674 master-0 kubenswrapper[7604]: I0309 16:38:57.728559 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:38:57.728674 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:38:57.728674 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:38:57.728674 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:38:57.729592 master-0 
kubenswrapper[7604]: I0309 16:38:57.728705 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:58.728460 master-0 kubenswrapper[7604]: I0309 16:38:58.728359 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:58.728460 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:58.728460 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:58.728460 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:58.729312 master-0 kubenswrapper[7604]: I0309 16:38:58.728475 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:38:59.728638 master-0 kubenswrapper[7604]: I0309 16:38:59.728543 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:38:59.728638 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:38:59.728638 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:38:59.728638 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:38:59.728638 master-0 kubenswrapper[7604]: I0309 16:38:59.728628 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:00.111890 master-0 kubenswrapper[7604]: I0309 16:39:00.111747 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:39:00.112107 master-0 kubenswrapper[7604]: E0309 16:39:00.112066 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:39:00.729903 master-0 kubenswrapper[7604]: I0309 16:39:00.729775 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:00.729903 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:00.729903 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:00.729903 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:00.730468 master-0 kubenswrapper[7604]: I0309 16:39:00.729908 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:01.727647 master-0 kubenswrapper[7604]: I0309 16:39:01.727582 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:01.727647 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:01.727647 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:01.727647 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:01.727969 master-0 kubenswrapper[7604]: I0309 16:39:01.727656 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:02.728562 master-0 kubenswrapper[7604]: I0309 16:39:02.728391 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:02.728562 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:02.728562 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:02.728562 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:02.728562 master-0 kubenswrapper[7604]: I0309 16:39:02.728564 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:03.728026 master-0 kubenswrapper[7604]: I0309 16:39:03.727978 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:03.728026 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:03.728026 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:03.728026 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:03.728490 master-0 kubenswrapper[7604]: I0309 16:39:03.728417 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:04.728197 master-0 kubenswrapper[7604]: I0309 16:39:04.728127 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:04.728197 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:04.728197 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:04.728197 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:04.728834 master-0 kubenswrapper[7604]: I0309 16:39:04.728212 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:05.727475 master-0 kubenswrapper[7604]: I0309 16:39:05.727420 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:05.727475 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:05.727475 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:05.727475 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:05.727924 master-0 kubenswrapper[7604]: I0309 16:39:05.727895 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:06.728297 master-0 kubenswrapper[7604]: I0309 16:39:06.728239 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:06.728297 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:06.728297 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:06.728297 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:06.729018 master-0 kubenswrapper[7604]: I0309 16:39:06.728315 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:07.728848 master-0 kubenswrapper[7604]: I0309 16:39:07.728670 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:07.728848 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:07.728848 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:07.728848 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:07.728848 master-0 kubenswrapper[7604]: I0309 16:39:07.728793 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:08.728099 master-0 kubenswrapper[7604]: I0309 16:39:08.728024 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:08.728099 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:08.728099 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:08.728099 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:08.728392 master-0 kubenswrapper[7604]: I0309 16:39:08.728101 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:08.808046 master-0 kubenswrapper[7604]: E0309 16:39:08.807930 7604 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:38:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:38:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:38:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-09T16:38:58Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 09 16:39:09.011039 master-0 kubenswrapper[7604]: I0309 16:39:09.010871 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 09 16:39:09.114554 master-0 kubenswrapper[7604]: I0309 16:39:09.114487 7604 scope.go:117] "RemoveContainer" containerID="76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd"
Mar 09 16:39:09.114781 master-0 kubenswrapper[7604]: E0309 16:39:09.114756 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-f594m_openshift-cluster-storage-operator(57036838-9f42-4ea1-a5c9-77f820cc22c9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podUID="57036838-9f42-4ea1-a5c9-77f820cc22c9"
Mar 09 16:39:09.128322 master-0 kubenswrapper[7604]: I0309 16:39:09.127089 7604 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Mar 09 16:39:09.131844 master-0 kubenswrapper[7604]: I0309 16:39:09.131773 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 09 16:39:09.196330 master-0 kubenswrapper[7604]: I0309 16:39:09.195554 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 09 16:39:09.261335 master-0 kubenswrapper[7604]: I0309 16:39:09.261151 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:39:09.261335 master-0 kubenswrapper[7604]: I0309 16:39:09.261193 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2e5e3bef-337f-40c0-a763-9d2fe46ef44d"
Mar 09 16:39:09.552733 master-0 kubenswrapper[7604]: I0309 16:39:09.552051 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=270.552029717 podStartE2EDuration="4m30.552029717s" podCreationTimestamp="2026-03-09 16:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:39:09.550687668 +0000 UTC m=+806.604657111" watchObservedRunningTime="2026-03-09 16:39:09.552029717 +0000 UTC m=+806.605999140"
Mar 09 16:39:09.654453 master-0 kubenswrapper[7604]: I0309 16:39:09.652832 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 09 16:39:09.662061 master-0 kubenswrapper[7604]: I0309 16:39:09.661854 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 09 16:39:09.727957 master-0 kubenswrapper[7604]: I0309 16:39:09.727922 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:09.727957 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:09.727957 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:09.727957 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:09.728279 master-0 kubenswrapper[7604]: I0309 16:39:09.728258 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:09.761025 master-0 kubenswrapper[7604]: I0309 16:39:09.760514 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.760493552 podStartE2EDuration="760.493552ms" podCreationTimestamp="2026-03-09 16:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:39:09.758057713 +0000 UTC m=+806.812027156" watchObservedRunningTime="2026-03-09 16:39:09.760493552 +0000 UTC m=+806.814462985"
Mar 09 16:39:09.965021 master-0 kubenswrapper[7604]: E0309 16:39:09.964904 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 09 16:39:10.730974 master-0 kubenswrapper[7604]: I0309 16:39:10.728199 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:10.730974 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:10.730974 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:10.730974 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:10.730974 master-0 kubenswrapper[7604]: I0309 16:39:10.728280 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:11.118595 master-0 kubenswrapper[7604]: I0309 16:39:11.118468 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d4d5a2-1544-4443-acc5-d7eee167a29c" path="/var/lib/kubelet/pods/84d4d5a2-1544-4443-acc5-d7eee167a29c/volumes"
Mar 09 16:39:11.728907 master-0 kubenswrapper[7604]: I0309 16:39:11.728788 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:11.728907 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:11.728907 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:11.728907 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:11.728907 master-0 kubenswrapper[7604]: I0309 16:39:11.728870 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:12.729046 master-0 kubenswrapper[7604]: I0309 16:39:12.728722 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:12.729046 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:12.729046 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:12.729046 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:12.729046 master-0 kubenswrapper[7604]: I0309 16:39:12.728819 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:12.900635 master-0 kubenswrapper[7604]: E0309 16:39:12.900448 7604 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189b390fe91c33cf kube-system 8213 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:26:58 +0000 UTC,LastTimestamp:2026-03-09 16:35:10.113257319 +0000 UTC m=+567.167226752,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 09 16:39:13.726962 master-0 kubenswrapper[7604]: I0309 16:39:13.726910 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:13.726962 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:13.726962 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:13.726962 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:13.727259 master-0 kubenswrapper[7604]: I0309 16:39:13.726968 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:14.729203 master-0 kubenswrapper[7604]: I0309 16:39:14.729111 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:14.729203 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:14.729203 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:14.729203 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:14.729203 master-0 kubenswrapper[7604]: I0309 16:39:14.729198 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:15.111190 master-0 kubenswrapper[7604]: I0309 16:39:15.111032 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:39:15.111405 master-0 kubenswrapper[7604]: E0309 16:39:15.111350 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:39:15.727823 master-0 kubenswrapper[7604]: I0309 16:39:15.727748 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:15.727823 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:15.727823 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:15.727823 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:15.728124 master-0 kubenswrapper[7604]: I0309 16:39:15.727835 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:16.728624 master-0 kubenswrapper[7604]: I0309 16:39:16.728548 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:16.728624 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:16.728624 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:16.728624 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:16.729395 master-0 kubenswrapper[7604]: I0309 16:39:16.728655 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:17.728667 master-0 kubenswrapper[7604]: I0309 16:39:17.728584 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:17.728667 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:17.728667 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:17.728667 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:17.728667 master-0 kubenswrapper[7604]: I0309 16:39:17.728692 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:18.729410 master-0 kubenswrapper[7604]: I0309 16:39:18.729219 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:18.729410 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:18.729410 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:18.729410 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:18.730293 master-0 kubenswrapper[7604]: I0309 16:39:18.729400 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:19.728840 master-0 kubenswrapper[7604]: I0309 16:39:19.728693 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:19.728840 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:19.728840 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:19.728840 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:19.729490 master-0 kubenswrapper[7604]: I0309 16:39:19.728843 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:20.729969 master-0 kubenswrapper[7604]: I0309 16:39:20.729797 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:20.729969 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:20.729969 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:20.729969 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:20.730903 master-0 kubenswrapper[7604]: I0309 16:39:20.729980 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:21.729685 master-0 kubenswrapper[7604]: I0309 16:39:21.729386 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:21.729685 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:21.729685 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:21.729685 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:21.729685 master-0 kubenswrapper[7604]: I0309 16:39:21.729511 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:22.729208 master-0 kubenswrapper[7604]: I0309 16:39:22.729125 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:22.729208 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:22.729208 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:22.729208 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:22.729788 master-0 kubenswrapper[7604]: I0309 16:39:22.729233 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:23.868633 master-0 kubenswrapper[7604]: I0309 16:39:23.116555 7604 scope.go:117] "RemoveContainer" containerID="76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd"
Mar 09 16:39:23.868633 master-0 kubenswrapper[7604]: E0309 16:39:23.116860 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-f594m_openshift-cluster-storage-operator(57036838-9f42-4ea1-a5c9-77f820cc22c9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" podUID="57036838-9f42-4ea1-a5c9-77f820cc22c9"
Mar 09 16:39:23.873744 master-0 kubenswrapper[7604]: I0309 16:39:23.873348 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:23.873744 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:23.873744 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:23.873744 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:23.873744 master-0 kubenswrapper[7604]: I0309 16:39:23.873500 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:24.365356 master-0 kubenswrapper[7604]: I0309 16:39:24.365226 7604 generic.go:334] "Generic (PLEG): container finished" podID="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" containerID="0890855b3b5026503838ed97808495935321e600acd88d8055621af6b2d87521" exitCode=0
Mar 09 16:39:24.365356 master-0 kubenswrapper[7604]: I0309 16:39:24.365270 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" event={"ID":"6c4dfdcc-e182-4831-98e4-1eedb069bcf6","Type":"ContainerDied","Data":"0890855b3b5026503838ed97808495935321e600acd88d8055621af6b2d87521"}
Mar 09 16:39:24.365356 master-0 kubenswrapper[7604]: I0309 16:39:24.365359 7604 scope.go:117] "RemoveContainer" containerID="7c3fbf08ff6da10a25d918bd4cbabfd4c79ce8ba8a9c8a411b80c1c351bae8a7"
Mar 09 16:39:24.366240 master-0 kubenswrapper[7604]: I0309 16:39:24.366199 7604 scope.go:117] "RemoveContainer" containerID="0890855b3b5026503838ed97808495935321e600acd88d8055621af6b2d87521"
Mar 09 16:39:24.727952 master-0 kubenswrapper[7604]: I0309 16:39:24.727881 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:24.727952 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:24.727952 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:24.727952 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:24.727952 master-0 kubenswrapper[7604]: I0309 16:39:24.727942 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:25.381058 master-0 kubenswrapper[7604]: I0309 16:39:25.380968 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" event={"ID":"6c4dfdcc-e182-4831-98e4-1eedb069bcf6","Type":"ContainerStarted","Data":"234f04e567792ed196946755e5365b142bcf5f5493c97da95df65ef97e55acf5"}
Mar 09 16:39:25.730449 master-0 kubenswrapper[7604]: I0309 16:39:25.728941 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:25.730449 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:25.730449 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:25.730449 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:25.730449 master-0 kubenswrapper[7604]: I0309 16:39:25.729044 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:26.392601 master-0 kubenswrapper[7604]: I0309 16:39:26.392131 7604 generic.go:334] "Generic (PLEG): container finished" podID="631f2bdf-2ed4-4315-98c3-c5a538d0aec3" containerID="ec2bd4079a912677c69adce5f15ccbeec93411cab07eef7010dd35a99bc07993" exitCode=0
Mar 09 16:39:26.392601 master-0 kubenswrapper[7604]: I0309 16:39:26.392210 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" event={"ID":"631f2bdf-2ed4-4315-98c3-c5a538d0aec3","Type":"ContainerDied","Data":"ec2bd4079a912677c69adce5f15ccbeec93411cab07eef7010dd35a99bc07993"}
Mar 09 16:39:26.393400 master-0 kubenswrapper[7604]: I0309 16:39:26.393033 7604 scope.go:117] "RemoveContainer" containerID="ec2bd4079a912677c69adce5f15ccbeec93411cab07eef7010dd35a99bc07993"
Mar 09 16:39:26.729455 master-0 kubenswrapper[7604]: I0309 16:39:26.729312 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:26.729455 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:26.729455 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:26.729455 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:26.729915 master-0 kubenswrapper[7604]: I0309 16:39:26.729466 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:26.966282 master-0 kubenswrapper[7604]: E0309 16:39:26.966154 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 09 16:39:27.405999 master-0 kubenswrapper[7604]: I0309 16:39:27.405900 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" event={"ID":"631f2bdf-2ed4-4315-98c3-c5a538d0aec3","Type":"ContainerStarted","Data":"c85c2e65863c51954601cbeb134dcddb20bc176e345ebf21d97c2b9b3da1d7a3"}
Mar 09 16:39:27.728761 master-0 kubenswrapper[7604]: I0309 16:39:27.728641 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:27.728761 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:27.728761 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:27.728761 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:27.728761 master-0 kubenswrapper[7604]: I0309 16:39:27.728749 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:28.728863 master-0 kubenswrapper[7604]: I0309 16:39:28.728766 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:39:28.728863 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:39:28.728863 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:39:28.728863 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:39:28.728863 master-0 kubenswrapper[7604]: I0309 16:39:28.728858 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:39:29.112164 master-0 kubenswrapper[7604]: I0309 16:39:29.111981 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:39:29.112385 master-0 kubenswrapper[7604]: E0309 16:39:29.112361 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:39:29.728736 master-0 kubenswrapper[7604]: I0309 16:39:29.728616 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:29.728736 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:29.728736 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:29.728736 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:29.729642 master-0 kubenswrapper[7604]: I0309 16:39:29.728761 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:30.436075 master-0 kubenswrapper[7604]: I0309 16:39:30.435962 7604 generic.go:334] "Generic (PLEG): container finished" podID="e2e38be5-1d33-4171-b27f-78a335f1590b" containerID="26536dc0c3eb884535f611edd83aab852a51eeb18c5af26fe55fde4610066f56" exitCode=0 Mar 09 16:39:30.436075 master-0 kubenswrapper[7604]: I0309 16:39:30.436046 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" event={"ID":"e2e38be5-1d33-4171-b27f-78a335f1590b","Type":"ContainerDied","Data":"26536dc0c3eb884535f611edd83aab852a51eeb18c5af26fe55fde4610066f56"} Mar 09 16:39:30.436075 master-0 kubenswrapper[7604]: I0309 16:39:30.436123 7604 scope.go:117] "RemoveContainer" containerID="aae9b4fa27818489ab82742a1d088f45fbd99626e96c87f0d251b8c8d0c8bce4" Mar 09 16:39:30.437051 master-0 kubenswrapper[7604]: I0309 16:39:30.436947 7604 scope.go:117] "RemoveContainer" containerID="26536dc0c3eb884535f611edd83aab852a51eeb18c5af26fe55fde4610066f56" Mar 09 16:39:30.728970 master-0 kubenswrapper[7604]: I0309 16:39:30.728750 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:39:30.728970 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:30.728970 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:30.728970 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:30.728970 master-0 kubenswrapper[7604]: I0309 16:39:30.728868 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:31.449133 master-0 kubenswrapper[7604]: I0309 16:39:31.449068 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" event={"ID":"e2e38be5-1d33-4171-b27f-78a335f1590b","Type":"ContainerStarted","Data":"8f5910182b3dc81f18bd10c9c09badb9bae6b55195eded2ec7d6b0b507178eda"} Mar 09 16:39:31.729513 master-0 kubenswrapper[7604]: I0309 16:39:31.729453 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:31.729513 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:31.729513 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:31.729513 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:31.730194 master-0 kubenswrapper[7604]: I0309 16:39:31.729537 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:32.462042 master-0 kubenswrapper[7604]: I0309 16:39:32.461932 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" 
containerID="f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613" exitCode=0 Mar 09 16:39:32.462622 master-0 kubenswrapper[7604]: I0309 16:39:32.462591 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"} Mar 09 16:39:32.463466 master-0 kubenswrapper[7604]: I0309 16:39:32.463447 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:39:32.463554 master-0 kubenswrapper[7604]: I0309 16:39:32.463542 7604 scope.go:117] "RemoveContainer" containerID="f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613" Mar 09 16:39:32.466286 master-0 kubenswrapper[7604]: I0309 16:39:32.466222 7604 generic.go:334] "Generic (PLEG): container finished" podID="af4aa8d4-09e1-4589-b7bf-885617a11337" containerID="f2698e39e3b5a035604353ee09cee0739a68806bc558360103357b0dbe104e2f" exitCode=0 Mar 09 16:39:32.466850 master-0 kubenswrapper[7604]: I0309 16:39:32.466394 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" event={"ID":"af4aa8d4-09e1-4589-b7bf-885617a11337","Type":"ContainerDied","Data":"f2698e39e3b5a035604353ee09cee0739a68806bc558360103357b0dbe104e2f"} Mar 09 16:39:32.467170 master-0 kubenswrapper[7604]: I0309 16:39:32.467122 7604 scope.go:117] "RemoveContainer" containerID="f2698e39e3b5a035604353ee09cee0739a68806bc558360103357b0dbe104e2f" Mar 09 16:39:32.468960 master-0 kubenswrapper[7604]: I0309 16:39:32.468905 7604 generic.go:334] "Generic (PLEG): container finished" podID="a62ba179-443d-424f-8cff-c75677e8cd5c" containerID="556fa937e7c3581b8c9b14e4926a7f4f60005bc952c23b42c146238b8e0e37d0" exitCode=0 Mar 09 16:39:32.469214 master-0 kubenswrapper[7604]: I0309 16:39:32.469036 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" event={"ID":"a62ba179-443d-424f-8cff-c75677e8cd5c","Type":"ContainerDied","Data":"556fa937e7c3581b8c9b14e4926a7f4f60005bc952c23b42c146238b8e0e37d0"} Mar 09 16:39:32.469772 master-0 kubenswrapper[7604]: I0309 16:39:32.469736 7604 scope.go:117] "RemoveContainer" containerID="556fa937e7c3581b8c9b14e4926a7f4f60005bc952c23b42c146238b8e0e37d0" Mar 09 16:39:32.474204 master-0 kubenswrapper[7604]: I0309 16:39:32.474143 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-gglsc_dc732d23-37bc-41c2-9f9b-333ba517c1f8/cluster-node-tuning-operator/0.log" Mar 09 16:39:32.474322 master-0 kubenswrapper[7604]: I0309 16:39:32.474227 7604 generic.go:334] "Generic (PLEG): container finished" podID="dc732d23-37bc-41c2-9f9b-333ba517c1f8" containerID="25a7ab145b0763001053c074ce2286add5df023f3e9455ff678697bf2aec9346" exitCode=1 Mar 09 16:39:32.474383 master-0 kubenswrapper[7604]: I0309 16:39:32.474301 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" event={"ID":"dc732d23-37bc-41c2-9f9b-333ba517c1f8","Type":"ContainerDied","Data":"25a7ab145b0763001053c074ce2286add5df023f3e9455ff678697bf2aec9346"} Mar 09 16:39:32.476385 master-0 kubenswrapper[7604]: I0309 16:39:32.476327 7604 scope.go:117] "RemoveContainer" containerID="25a7ab145b0763001053c074ce2286add5df023f3e9455ff678697bf2aec9346" Mar 09 16:39:32.733251 master-0 kubenswrapper[7604]: I0309 16:39:32.733168 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:32.733251 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:32.733251 
master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:32.733251 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:32.734175 master-0 kubenswrapper[7604]: I0309 16:39:32.733287 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:32.764281 master-0 kubenswrapper[7604]: E0309 16:39:32.764164 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:39:33.487196 master-0 kubenswrapper[7604]: I0309 16:39:33.487129 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-gglsc_dc732d23-37bc-41c2-9f9b-333ba517c1f8/cluster-node-tuning-operator/0.log" Mar 09 16:39:33.487677 master-0 kubenswrapper[7604]: I0309 16:39:33.487261 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" event={"ID":"dc732d23-37bc-41c2-9f9b-333ba517c1f8","Type":"ContainerStarted","Data":"13f8c9536796a25a0e9343287f8e9e24f1fd90bf2e457aaa3e3ed6c6aa8d248e"} Mar 09 16:39:33.494942 master-0 kubenswrapper[7604]: I0309 16:39:33.494862 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138"} Mar 09 16:39:33.496178 master-0 
kubenswrapper[7604]: I0309 16:39:33.496147 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:39:33.496531 master-0 kubenswrapper[7604]: E0309 16:39:33.496492 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:39:33.507536 master-0 kubenswrapper[7604]: I0309 16:39:33.506758 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" event={"ID":"af4aa8d4-09e1-4589-b7bf-885617a11337","Type":"ContainerStarted","Data":"adff73492035d2f5df06ef0287673a9f58659cf3248fe08838942023056fcf94"} Mar 09 16:39:33.523989 master-0 kubenswrapper[7604]: I0309 16:39:33.523923 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" event={"ID":"a62ba179-443d-424f-8cff-c75677e8cd5c","Type":"ContainerStarted","Data":"465fb22f8bca2c0c5997758a0de13609b1778a95a046b98baa16220ee5e06204"} Mar 09 16:39:33.731965 master-0 kubenswrapper[7604]: I0309 16:39:33.729834 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:33.731965 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:33.731965 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:33.731965 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:33.731965 master-0 kubenswrapper[7604]: I0309 16:39:33.729933 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:34.534480 master-0 kubenswrapper[7604]: I0309 16:39:34.534408 7604 generic.go:334] "Generic (PLEG): container finished" podID="eaf7dea5-9848-41f0-bf0b-ec70ec0380f1" containerID="cf28483378cea782ea700907bc68169878c403e836eb639a2889f087184ba71c" exitCode=0 Mar 09 16:39:34.535337 master-0 kubenswrapper[7604]: I0309 16:39:34.534506 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" event={"ID":"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1","Type":"ContainerDied","Data":"cf28483378cea782ea700907bc68169878c403e836eb639a2889f087184ba71c"} Mar 09 16:39:34.536047 master-0 kubenswrapper[7604]: I0309 16:39:34.536005 7604 scope.go:117] "RemoveContainer" containerID="cf28483378cea782ea700907bc68169878c403e836eb639a2889f087184ba71c" Mar 09 16:39:34.731023 master-0 kubenswrapper[7604]: I0309 16:39:34.730924 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:34.731023 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:34.731023 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:34.731023 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:34.731565 master-0 kubenswrapper[7604]: I0309 16:39:34.731062 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:35.020763 master-0 
kubenswrapper[7604]: I0309 16:39:35.020640 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:39:35.021950 master-0 kubenswrapper[7604]: I0309 16:39:35.021893 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:39:35.022313 master-0 kubenswrapper[7604]: E0309 16:39:35.022261 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:39:35.112565 master-0 kubenswrapper[7604]: I0309 16:39:35.112495 7604 scope.go:117] "RemoveContainer" containerID="76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd" Mar 09 16:39:35.546166 master-0 kubenswrapper[7604]: I0309 16:39:35.545946 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" event={"ID":"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1","Type":"ContainerStarted","Data":"9901d2dd67abdebeac6d6c7614d60c3f4ee8d77146c76fb475ac6e8494d8f7d4"} Mar 09 16:39:35.550129 master-0 kubenswrapper[7604]: I0309 16:39:35.550068 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/3.log" Mar 09 16:39:35.550399 master-0 kubenswrapper[7604]: I0309 16:39:35.550146 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" 
event={"ID":"57036838-9f42-4ea1-a5c9-77f820cc22c9","Type":"ContainerStarted","Data":"3f2e006a781b880b2d0399293929335a6c3b306f3c836fce50f11629e7784641"} Mar 09 16:39:35.729609 master-0 kubenswrapper[7604]: I0309 16:39:35.729480 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:35.729609 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:35.729609 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:35.729609 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:35.730009 master-0 kubenswrapper[7604]: I0309 16:39:35.729729 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:36.729524 master-0 kubenswrapper[7604]: I0309 16:39:36.729377 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:36.729524 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:36.729524 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:36.729524 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:36.729524 master-0 kubenswrapper[7604]: I0309 16:39:36.729545 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:37.729352 master-0 kubenswrapper[7604]: I0309 
16:39:37.729251 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:37.729352 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:37.729352 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:37.729352 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:37.730166 master-0 kubenswrapper[7604]: I0309 16:39:37.729376 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:38.730651 master-0 kubenswrapper[7604]: I0309 16:39:38.730504 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:38.730651 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:38.730651 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:38.730651 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:38.731552 master-0 kubenswrapper[7604]: I0309 16:39:38.730663 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:39.591680 master-0 kubenswrapper[7604]: I0309 16:39:39.591575 7604 generic.go:334] "Generic (PLEG): container finished" podID="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" containerID="5d8c100b8bc3cd727e168a74c2e48d870e8a9516215f22c217ef9c223c8bfc22" exitCode=0 Mar 09 
16:39:39.591680 master-0 kubenswrapper[7604]: I0309 16:39:39.591613 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" event={"ID":"6cf9eae5-38bc-48fa-8339-d0751bb18e8c","Type":"ContainerDied","Data":"5d8c100b8bc3cd727e168a74c2e48d870e8a9516215f22c217ef9c223c8bfc22"} Mar 09 16:39:39.591680 master-0 kubenswrapper[7604]: I0309 16:39:39.591689 7604 scope.go:117] "RemoveContainer" containerID="1e5e32f0f63434eb2622b072a5c0a325920460736fce227cb33b7dd8fc950069" Mar 09 16:39:39.592588 master-0 kubenswrapper[7604]: I0309 16:39:39.592542 7604 scope.go:117] "RemoveContainer" containerID="5d8c100b8bc3cd727e168a74c2e48d870e8a9516215f22c217ef9c223c8bfc22" Mar 09 16:39:39.597507 master-0 kubenswrapper[7604]: I0309 16:39:39.597260 7604 generic.go:334] "Generic (PLEG): container finished" podID="d6912539-9b06-4e2c-b6a8-155df31147f2" containerID="cd7efe315849cdb3199a98f6f5c36f77f4fa9f5957ff9a8e14c0814b556fdc59" exitCode=0 Mar 09 16:39:39.597507 master-0 kubenswrapper[7604]: I0309 16:39:39.597324 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" event={"ID":"d6912539-9b06-4e2c-b6a8-155df31147f2","Type":"ContainerDied","Data":"cd7efe315849cdb3199a98f6f5c36f77f4fa9f5957ff9a8e14c0814b556fdc59"} Mar 09 16:39:39.598363 master-0 kubenswrapper[7604]: I0309 16:39:39.598314 7604 scope.go:117] "RemoveContainer" containerID="cd7efe315849cdb3199a98f6f5c36f77f4fa9f5957ff9a8e14c0814b556fdc59" Mar 09 16:39:39.636770 master-0 kubenswrapper[7604]: I0309 16:39:39.636682 7604 scope.go:117] "RemoveContainer" containerID="a517766120d5207dbc0746849224568d7e6239234bc628933b81ef9e4c5bff53" Mar 09 16:39:39.729077 master-0 kubenswrapper[7604]: I0309 16:39:39.728987 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:39.729077 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:39.729077 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:39.729077 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:39.729871 master-0 kubenswrapper[7604]: I0309 16:39:39.729815 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:40.416650 master-0 kubenswrapper[7604]: I0309 16:39:40.416537 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:39:40.417695 master-0 kubenswrapper[7604]: I0309 16:39:40.417455 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:39:40.417829 master-0 kubenswrapper[7604]: E0309 16:39:40.417775 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:39:40.614384 master-0 kubenswrapper[7604]: I0309 16:39:40.614304 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" event={"ID":"6cf9eae5-38bc-48fa-8339-d0751bb18e8c","Type":"ContainerStarted","Data":"4bdcaeef9bb19b450661528f6e50a89c78fbbf3bdd68f7c7795014a8cbe22ae3"} Mar 09 16:39:40.620259 master-0 kubenswrapper[7604]: I0309 16:39:40.620150 7604 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" event={"ID":"d6912539-9b06-4e2c-b6a8-155df31147f2","Type":"ContainerStarted","Data":"e27c96a6ca3980378332d0a705d84dceaf33ff987028e216b3f0f2835afa5d96"} Mar 09 16:39:40.729535 master-0 kubenswrapper[7604]: I0309 16:39:40.729399 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:40.729535 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:40.729535 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:40.729535 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:40.730097 master-0 kubenswrapper[7604]: I0309 16:39:40.729570 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:41.729236 master-0 kubenswrapper[7604]: I0309 16:39:41.729143 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:41.729236 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:41.729236 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:41.729236 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:41.730057 master-0 kubenswrapper[7604]: I0309 16:39:41.729252 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:42.728632 master-0 kubenswrapper[7604]: I0309 16:39:42.728531 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:42.728632 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:42.728632 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:42.728632 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:42.729011 master-0 kubenswrapper[7604]: I0309 16:39:42.728649 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:43.417343 master-0 kubenswrapper[7604]: I0309 16:39:43.417242 7604 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 09 16:39:43.728762 master-0 kubenswrapper[7604]: I0309 16:39:43.728673 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:43.728762 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:43.728762 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:43.728762 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:43.729276 
master-0 kubenswrapper[7604]: I0309 16:39:43.728789 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:43.967885 master-0 kubenswrapper[7604]: E0309 16:39:43.967505 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 09 16:39:44.729563 master-0 kubenswrapper[7604]: I0309 16:39:44.729472 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:44.729563 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:44.729563 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:44.729563 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:44.730479 master-0 kubenswrapper[7604]: I0309 16:39:44.729610 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:45.737539 master-0 kubenswrapper[7604]: I0309 16:39:45.735923 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:45.737539 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:39:45.737539 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:45.737539 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:45.737539 master-0 kubenswrapper[7604]: I0309 16:39:45.736032 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:46.425528 master-0 kubenswrapper[7604]: I0309 16:39:46.425308 7604 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-xzwh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 09 16:39:46.425528 master-0 kubenswrapper[7604]: I0309 16:39:46.425409 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" podUID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 09 16:39:46.670446 master-0 kubenswrapper[7604]: I0309 16:39:46.670238 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-nmvdk_d2d3c20a-f92e-433b-9fbc-b667b7bcf175/openshift-controller-manager-operator/0.log" Mar 09 16:39:46.670446 master-0 kubenswrapper[7604]: I0309 16:39:46.670327 7604 generic.go:334] "Generic (PLEG): container finished" podID="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" containerID="b1c16a3899be6493dfcbe845944c02e0cb586d0232ff82e821db925b84a7b8fd" exitCode=0 Mar 09 16:39:46.672463 master-0 kubenswrapper[7604]: I0309 16:39:46.670396 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" event={"ID":"d2d3c20a-f92e-433b-9fbc-b667b7bcf175","Type":"ContainerDied","Data":"b1c16a3899be6493dfcbe845944c02e0cb586d0232ff82e821db925b84a7b8fd"} Mar 09 16:39:46.672542 master-0 kubenswrapper[7604]: I0309 16:39:46.672504 7604 scope.go:117] "RemoveContainer" containerID="cc3b26ecc6db80d8920394a2785316da766a94e7ed17c29a0dba7776c2765c20" Mar 09 16:39:46.673411 master-0 kubenswrapper[7604]: I0309 16:39:46.673373 7604 scope.go:117] "RemoveContainer" containerID="b1c16a3899be6493dfcbe845944c02e0cb586d0232ff82e821db925b84a7b8fd" Mar 09 16:39:46.678689 master-0 kubenswrapper[7604]: I0309 16:39:46.678256 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerDied","Data":"0c53bd04ab08a6dcf8bec8933ab495e84121056b0c52db4cc518d1487933ea5c"} Mar 09 16:39:46.678909 master-0 kubenswrapper[7604]: I0309 16:39:46.678881 7604 scope.go:117] "RemoveContainer" containerID="0c53bd04ab08a6dcf8bec8933ab495e84121056b0c52db4cc518d1487933ea5c" Mar 09 16:39:46.679649 master-0 kubenswrapper[7604]: I0309 16:39:46.678066 7604 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" containerID="0c53bd04ab08a6dcf8bec8933ab495e84121056b0c52db4cc518d1487933ea5c" exitCode=0 Mar 09 16:39:46.683188 master-0 kubenswrapper[7604]: I0309 16:39:46.683146 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-4qg6v_a6cd9347-eec9-4549-9de4-6033112634ce/machine-api-operator/0.log" Mar 09 16:39:46.683749 master-0 kubenswrapper[7604]: I0309 16:39:46.683703 7604 generic.go:334] "Generic (PLEG): container finished" podID="a6cd9347-eec9-4549-9de4-6033112634ce" containerID="4a72ada443de84c13a8cbe47843e972a9ed55f3d914623df43cbb70dacd90962" exitCode=255 Mar 09 16:39:46.683851 
master-0 kubenswrapper[7604]: I0309 16:39:46.683805 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" event={"ID":"a6cd9347-eec9-4549-9de4-6033112634ce","Type":"ContainerDied","Data":"4a72ada443de84c13a8cbe47843e972a9ed55f3d914623df43cbb70dacd90962"} Mar 09 16:39:46.684587 master-0 kubenswrapper[7604]: I0309 16:39:46.684550 7604 scope.go:117] "RemoveContainer" containerID="4a72ada443de84c13a8cbe47843e972a9ed55f3d914623df43cbb70dacd90962" Mar 09 16:39:46.690478 master-0 kubenswrapper[7604]: I0309 16:39:46.690027 7604 generic.go:334] "Generic (PLEG): container finished" podID="34a4491c-12cc-4531-ad3e-246e93ed7842" containerID="49dd8e161cea6212329f1712e1bf4a0806751557004321c54967d70157f3883b" exitCode=0 Mar 09 16:39:46.690478 master-0 kubenswrapper[7604]: I0309 16:39:46.690126 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" event={"ID":"34a4491c-12cc-4531-ad3e-246e93ed7842","Type":"ContainerDied","Data":"49dd8e161cea6212329f1712e1bf4a0806751557004321c54967d70157f3883b"} Mar 09 16:39:46.690960 master-0 kubenswrapper[7604]: I0309 16:39:46.690920 7604 scope.go:117] "RemoveContainer" containerID="49dd8e161cea6212329f1712e1bf4a0806751557004321c54967d70157f3883b" Mar 09 16:39:46.697264 master-0 kubenswrapper[7604]: I0309 16:39:46.697204 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-r82z7_5565c060-5952-4e85-8873-18bb80663924/network-operator/0.log" Mar 09 16:39:46.697443 master-0 kubenswrapper[7604]: I0309 16:39:46.697294 7604 generic.go:334] "Generic (PLEG): container finished" podID="5565c060-5952-4e85-8873-18bb80663924" containerID="dda1c1f36a6b6d9ac75b2bd00d887fa58cc2391c73527d2f8cbd81621d10c3e4" exitCode=0 Mar 09 16:39:46.699511 master-0 kubenswrapper[7604]: I0309 16:39:46.697488 7604 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" event={"ID":"5565c060-5952-4e85-8873-18bb80663924","Type":"ContainerDied","Data":"dda1c1f36a6b6d9ac75b2bd00d887fa58cc2391c73527d2f8cbd81621d10c3e4"} Mar 09 16:39:46.699511 master-0 kubenswrapper[7604]: I0309 16:39:46.698583 7604 scope.go:117] "RemoveContainer" containerID="dda1c1f36a6b6d9ac75b2bd00d887fa58cc2391c73527d2f8cbd81621d10c3e4" Mar 09 16:39:46.702931 master-0 kubenswrapper[7604]: I0309 16:39:46.702878 7604 generic.go:334] "Generic (PLEG): container finished" podID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerID="58ca4bfd8d3d92cf6b0638eb596cecb093134580ce5c529622e4707ab6f67862" exitCode=0 Mar 09 16:39:46.703039 master-0 kubenswrapper[7604]: I0309 16:39:46.702987 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" event={"ID":"8677cbd3-649f-41cd-8b8a-eadca971906b","Type":"ContainerDied","Data":"58ca4bfd8d3d92cf6b0638eb596cecb093134580ce5c529622e4707ab6f67862"} Mar 09 16:39:46.704111 master-0 kubenswrapper[7604]: I0309 16:39:46.704069 7604 scope.go:117] "RemoveContainer" containerID="58ca4bfd8d3d92cf6b0638eb596cecb093134580ce5c529622e4707ab6f67862" Mar 09 16:39:46.705904 master-0 kubenswrapper[7604]: I0309 16:39:46.705842 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-xzwh9_457f42a7-f14c-4d61-a87a-bc1ed422feed/openshift-config-operator/1.log" Mar 09 16:39:46.706463 master-0 kubenswrapper[7604]: I0309 16:39:46.706365 7604 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="51cc97980a013ef784c30d027db741202e1e61692ca828907c9b9adb40652a56" exitCode=0 Mar 09 16:39:46.706898 master-0 kubenswrapper[7604]: I0309 16:39:46.706471 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" 
event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerDied","Data":"51cc97980a013ef784c30d027db741202e1e61692ca828907c9b9adb40652a56"} Mar 09 16:39:46.706997 master-0 kubenswrapper[7604]: I0309 16:39:46.706971 7604 scope.go:117] "RemoveContainer" containerID="51cc97980a013ef784c30d027db741202e1e61692ca828907c9b9adb40652a56" Mar 09 16:39:46.715905 master-0 kubenswrapper[7604]: I0309 16:39:46.715826 7604 scope.go:117] "RemoveContainer" containerID="20c3af1506f68ad55d72af72ba11892a7b1fbea246aad319e67c6ab36a77fae2" Mar 09 16:39:46.716397 master-0 kubenswrapper[7604]: I0309 16:39:46.716359 7604 generic.go:334] "Generic (PLEG): container finished" podID="2e765395-7c6b-4cba-9a5a-37ba888722bb" containerID="1765d222fa51dc975cebdd1bdcaa4ce3c6b31334b8d1330af7de3940a2e5ca59" exitCode=0 Mar 09 16:39:46.716480 master-0 kubenswrapper[7604]: I0309 16:39:46.716459 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" event={"ID":"2e765395-7c6b-4cba-9a5a-37ba888722bb","Type":"ContainerDied","Data":"1765d222fa51dc975cebdd1bdcaa4ce3c6b31334b8d1330af7de3940a2e5ca59"} Mar 09 16:39:46.717200 master-0 kubenswrapper[7604]: I0309 16:39:46.717141 7604 scope.go:117] "RemoveContainer" containerID="1765d222fa51dc975cebdd1bdcaa4ce3c6b31334b8d1330af7de3940a2e5ca59" Mar 09 16:39:46.722072 master-0 kubenswrapper[7604]: I0309 16:39:46.721765 7604 generic.go:334] "Generic (PLEG): container finished" podID="d6b4992e-50f3-473c-aa83-ed35569ba307" containerID="81a061ad8b3b8276fdddd4547781d1739b9b814b6efb0c8aa846322d762aeea4" exitCode=0 Mar 09 16:39:46.722072 master-0 kubenswrapper[7604]: I0309 16:39:46.721812 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" event={"ID":"d6b4992e-50f3-473c-aa83-ed35569ba307","Type":"ContainerDied","Data":"81a061ad8b3b8276fdddd4547781d1739b9b814b6efb0c8aa846322d762aeea4"} Mar 09 16:39:46.722072 
master-0 kubenswrapper[7604]: I0309 16:39:46.721984 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:39:46.722823 master-0 kubenswrapper[7604]: I0309 16:39:46.722786 7604 scope.go:117] "RemoveContainer" containerID="81a061ad8b3b8276fdddd4547781d1739b9b814b6efb0c8aa846322d762aeea4" Mar 09 16:39:46.732288 master-0 kubenswrapper[7604]: I0309 16:39:46.729821 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jzjhh_8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/cluster-autoscaler-operator/0.log" Mar 09 16:39:46.732288 master-0 kubenswrapper[7604]: I0309 16:39:46.730505 7604 generic.go:334] "Generic (PLEG): container finished" podID="8d1829b3-643f-4f79-b6de-ae6ca5e78d4a" containerID="e8cb30c90125a1e3b3eb6f6752eb090667969ca7a1ad05a2f50043a22d1558b3" exitCode=255 Mar 09 16:39:46.732288 master-0 kubenswrapper[7604]: I0309 16:39:46.730576 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" event={"ID":"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a","Type":"ContainerDied","Data":"e8cb30c90125a1e3b3eb6f6752eb090667969ca7a1ad05a2f50043a22d1558b3"} Mar 09 16:39:46.732898 master-0 kubenswrapper[7604]: I0309 16:39:46.732868 7604 scope.go:117] "RemoveContainer" containerID="e8cb30c90125a1e3b3eb6f6752eb090667969ca7a1ad05a2f50043a22d1558b3" Mar 09 16:39:46.733680 master-0 kubenswrapper[7604]: I0309 16:39:46.733586 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:39:46.733680 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:39:46.733680 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:39:46.733680 
master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:39:46.733680 master-0 kubenswrapper[7604]: I0309 16:39:46.733639 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:39:46.734250 master-0 kubenswrapper[7604]: I0309 16:39:46.733710 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:39:46.734514 master-0 kubenswrapper[7604]: I0309 16:39:46.734456 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"4c1869d3a7ddcc58f5543caea28428d5b999484c891498c9388eaad6a5d85b10"} pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" containerMessage="Container router failed startup probe, will be restarted" Mar 09 16:39:46.734670 master-0 kubenswrapper[7604]: I0309 16:39:46.734506 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" containerID="cri-o://4c1869d3a7ddcc58f5543caea28428d5b999484c891498c9388eaad6a5d85b10" gracePeriod=3600 Mar 09 16:39:46.740297 master-0 kubenswrapper[7604]: I0309 16:39:46.738957 7604 generic.go:334] "Generic (PLEG): container finished" podID="8972b380-8f87-4b73-8f95-440d34d03884" containerID="478050fc5a610db3a7ffbb70974c16fcbc1a3e86ff4bd2cba7f1c2f94f7b4a39" exitCode=0 Mar 09 16:39:46.740297 master-0 kubenswrapper[7604]: I0309 16:39:46.739059 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" event={"ID":"8972b380-8f87-4b73-8f95-440d34d03884","Type":"ContainerDied","Data":"478050fc5a610db3a7ffbb70974c16fcbc1a3e86ff4bd2cba7f1c2f94f7b4a39"} Mar 09 
16:39:46.740297 master-0 kubenswrapper[7604]: I0309 16:39:46.739866 7604 scope.go:117] "RemoveContainer" containerID="478050fc5a610db3a7ffbb70974c16fcbc1a3e86ff4bd2cba7f1c2f94f7b4a39" Mar 09 16:39:46.782042 master-0 kubenswrapper[7604]: I0309 16:39:46.779628 7604 generic.go:334] "Generic (PLEG): container finished" podID="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" containerID="6e9c4ef8e54a1ddaaeace68d16cbf279e55f0b1084e638b1cbf0208c30f75c2d" exitCode=0 Mar 09 16:39:46.782042 master-0 kubenswrapper[7604]: I0309 16:39:46.779841 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" event={"ID":"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a","Type":"ContainerDied","Data":"6e9c4ef8e54a1ddaaeace68d16cbf279e55f0b1084e638b1cbf0208c30f75c2d"} Mar 09 16:39:46.782042 master-0 kubenswrapper[7604]: I0309 16:39:46.780692 7604 scope.go:117] "RemoveContainer" containerID="6e9c4ef8e54a1ddaaeace68d16cbf279e55f0b1084e638b1cbf0208c30f75c2d" Mar 09 16:39:46.785869 master-0 kubenswrapper[7604]: I0309 16:39:46.785065 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv_f965b971-7e9a-4513-8450-b2b527609bd6/package-server-manager/0.log" Mar 09 16:39:46.786874 master-0 kubenswrapper[7604]: I0309 16:39:46.786812 7604 generic.go:334] "Generic (PLEG): container finished" podID="f965b971-7e9a-4513-8450-b2b527609bd6" containerID="6d5f471d38ab26de2789bb7383ccfd1af1a0996fc7de4e1ac556541f152b9d74" exitCode=1 Mar 09 16:39:46.787032 master-0 kubenswrapper[7604]: I0309 16:39:46.786990 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" event={"ID":"f965b971-7e9a-4513-8450-b2b527609bd6","Type":"ContainerDied","Data":"6d5f471d38ab26de2789bb7383ccfd1af1a0996fc7de4e1ac556541f152b9d74"} Mar 09 16:39:46.787913 master-0 kubenswrapper[7604]: I0309 16:39:46.787868 7604 
scope.go:117] "RemoveContainer" containerID="6d5f471d38ab26de2789bb7383ccfd1af1a0996fc7de4e1ac556541f152b9d74" Mar 09 16:39:46.795878 master-0 kubenswrapper[7604]: I0309 16:39:46.794083 7604 generic.go:334] "Generic (PLEG): container finished" podID="3a612208-f777-486f-9dde-048b2d898c7f" containerID="7559e3794c2b375f42338baad89cc8a6296746d7de572bec45d4f7ebb08433c6" exitCode=0 Mar 09 16:39:46.795878 master-0 kubenswrapper[7604]: I0309 16:39:46.794298 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" event={"ID":"3a612208-f777-486f-9dde-048b2d898c7f","Type":"ContainerDied","Data":"7559e3794c2b375f42338baad89cc8a6296746d7de572bec45d4f7ebb08433c6"} Mar 09 16:39:46.795878 master-0 kubenswrapper[7604]: I0309 16:39:46.795076 7604 scope.go:117] "RemoveContainer" containerID="7559e3794c2b375f42338baad89cc8a6296746d7de572bec45d4f7ebb08433c6" Mar 09 16:39:46.804956 master-0 kubenswrapper[7604]: I0309 16:39:46.804864 7604 generic.go:334] "Generic (PLEG): container finished" podID="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" containerID="5a53068d3aa0add7405bb4afae02f9c31d2802806c126fb434c8dcf05fc615e2" exitCode=0 Mar 09 16:39:46.804956 master-0 kubenswrapper[7604]: I0309 16:39:46.804937 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" event={"ID":"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d","Type":"ContainerDied","Data":"5a53068d3aa0add7405bb4afae02f9c31d2802806c126fb434c8dcf05fc615e2"} Mar 09 16:39:46.805801 master-0 kubenswrapper[7604]: I0309 16:39:46.805761 7604 scope.go:117] "RemoveContainer" containerID="5a53068d3aa0add7405bb4afae02f9c31d2802806c126fb434c8dcf05fc615e2" Mar 09 16:39:46.831800 master-0 kubenswrapper[7604]: I0309 16:39:46.831747 7604 scope.go:117] "RemoveContainer" containerID="fa5ddd5802e33c8a6619b86d4545b8a3364c98e851507c10917062099a64157c" Mar 09 16:39:46.999262 
master-0 kubenswrapper[7604]: I0309 16:39:46.999188 7604 scope.go:117] "RemoveContainer" containerID="a8d177dbb3aa3504d7da8194a33995b9c5590e73006f731e32a19254943a15e2" Mar 09 16:39:47.277115 master-0 kubenswrapper[7604]: I0309 16:39:47.277002 7604 scope.go:117] "RemoveContainer" containerID="97231a996b3f971d2df45300f8add68d0e10efa9719fb86375b4c767d77ae7f2" Mar 09 16:39:47.349119 master-0 kubenswrapper[7604]: I0309 16:39:47.349081 7604 scope.go:117] "RemoveContainer" containerID="ed8140bb922b35373782d1b39705b1d6200c0f0fb01785807a86c3fad481d2c8" Mar 09 16:39:47.472079 master-0 kubenswrapper[7604]: I0309 16:39:47.471619 7604 scope.go:117] "RemoveContainer" containerID="a68cd08d6d3f33869738052123770a9d77db899c72df9e881a8184753514b484" Mar 09 16:39:47.520229 master-0 kubenswrapper[7604]: I0309 16:39:47.520196 7604 scope.go:117] "RemoveContainer" containerID="8fc1f9c122b644d42570f9573ceb86c8b66b157aee149e8b75a17dc9c0fc5570" Mar 09 16:39:47.560880 master-0 kubenswrapper[7604]: I0309 16:39:47.560819 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:39:47.561273 master-0 kubenswrapper[7604]: I0309 16:39:47.561214 7604 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:39:47.817799 master-0 kubenswrapper[7604]: I0309 16:39:47.817601 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" event={"ID":"d2d3c20a-f92e-433b-9fbc-b667b7bcf175","Type":"ContainerStarted","Data":"5064bf1b7a060c603235947815f0e946ceee863595782d95594e1678d0cd4812"} Mar 09 16:39:47.821580 master-0 kubenswrapper[7604]: I0309 16:39:47.821516 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jzjhh_8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/cluster-autoscaler-operator/0.log" Mar 09 16:39:47.822078 master-0 kubenswrapper[7604]: I0309 16:39:47.822020 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" event={"ID":"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a","Type":"ContainerStarted","Data":"1deaf68a814cc35632e0b8f0b827da8b1ddc94b11df2a48d1036bd04cf9dd3b7"} Mar 09 16:39:47.827555 master-0 kubenswrapper[7604]: I0309 16:39:47.827494 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" event={"ID":"8677cbd3-649f-41cd-8b8a-eadca971906b","Type":"ContainerStarted","Data":"3154e133fbf2500b6ea42f7db977fa73d4cbaf642b7311e9ee095fda1f327ff1"} Mar 09 16:39:47.829154 master-0 kubenswrapper[7604]: I0309 16:39:47.829091 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:39:47.832463 master-0 kubenswrapper[7604]: I0309 16:39:47.832391 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" event={"ID":"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a","Type":"ContainerStarted","Data":"baaddfcc998cd7abfc03c3c44cfb8f0e854de5a6da7f9002ed1c30a1e5164616"} Mar 09 16:39:47.848166 master-0 kubenswrapper[7604]: I0309 16:39:47.848062 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" event={"ID":"3a612208-f777-486f-9dde-048b2d898c7f","Type":"ContainerStarted","Data":"6f0f1ec80c34c93408da145c464e859a25fbafc42585bf642a2b4e0f1a9406f8"} Mar 09 16:39:47.869105 master-0 kubenswrapper[7604]: I0309 16:39:47.868661 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" event={"ID":"d6b4992e-50f3-473c-aa83-ed35569ba307","Type":"ContainerStarted","Data":"0c2a3e7fd558421654b5e51f803a6f9bd6669a0c895bef3921dd90d4bd3f047f"} Mar 09 16:39:47.884096 master-0 kubenswrapper[7604]: I0309 16:39:47.883493 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" event={"ID":"1e97466a-7c33-4efb-a961-14024d913a21","Type":"ContainerStarted","Data":"80e81fc634e2d51fb75c1e907d6821b7dc5082e80252cad19bc4cd366097168f"} Mar 09 16:39:47.894798 master-0 kubenswrapper[7604]: I0309 16:39:47.891374 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" event={"ID":"457f42a7-f14c-4d61-a87a-bc1ed422feed","Type":"ContainerStarted","Data":"40eca381762a44efcbf01051438a764f0ee5382fecc51a90df1360f0b48b1d11"} Mar 09 16:39:47.894798 master-0 kubenswrapper[7604]: I0309 16:39:47.894635 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:39:47.925444 master-0 kubenswrapper[7604]: I0309 16:39:47.925337 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" event={"ID":"5565c060-5952-4e85-8873-18bb80663924","Type":"ContainerStarted","Data":"d3838b4f8d4fcfdcb62a8e8ca57747e28a500271dc6855cb8bf0a8bcb56d0268"} Mar 09 16:39:47.944459 master-0 kubenswrapper[7604]: I0309 16:39:47.941496 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" event={"ID":"2e765395-7c6b-4cba-9a5a-37ba888722bb","Type":"ContainerStarted","Data":"025bf9d26459d8f3e6965c1eff269cca2544cfdba591f4c35817201e988bce8a"} Mar 09 16:39:47.952051 master-0 kubenswrapper[7604]: I0309 16:39:47.950718 7604 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" event={"ID":"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d","Type":"ContainerStarted","Data":"8e2653c337bebb3ef2bd9d58eef75abf5db3fa265014d056a9e91ec758562644"} Mar 09 16:39:47.955486 master-0 kubenswrapper[7604]: I0309 16:39:47.952939 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:39:47.970543 master-0 kubenswrapper[7604]: I0309 16:39:47.967855 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" event={"ID":"8972b380-8f87-4b73-8f95-440d34d03884","Type":"ContainerStarted","Data":"383fdf743cd06d3dbf1ddd221601fa65f329a4387a69b89e5f873bdf3351e6d3"} Mar 09 16:39:47.987458 master-0 kubenswrapper[7604]: I0309 16:39:47.987117 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-4qg6v_a6cd9347-eec9-4549-9de4-6033112634ce/machine-api-operator/0.log" Mar 09 16:39:47.987899 master-0 kubenswrapper[7604]: I0309 16:39:47.987702 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" event={"ID":"a6cd9347-eec9-4549-9de4-6033112634ce","Type":"ContainerStarted","Data":"4c6a02534d52507dd35df401aae9ebaaee2742ea184511dffc78cecc84d136e5"} Mar 09 16:39:48.009704 master-0 kubenswrapper[7604]: I0309 16:39:48.009567 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" event={"ID":"34a4491c-12cc-4531-ad3e-246e93ed7842","Type":"ContainerStarted","Data":"22de771aeaa11d9515856e96727e7a14d6c0cdb2e394332c4a09f85881ae1c19"} Mar 09 16:39:48.026620 master-0 kubenswrapper[7604]: I0309 16:39:48.026539 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv_f965b971-7e9a-4513-8450-b2b527609bd6/package-server-manager/0.log" Mar 09 16:39:48.034443 master-0 kubenswrapper[7604]: I0309 16:39:48.034300 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" event={"ID":"f965b971-7e9a-4513-8450-b2b527609bd6","Type":"ContainerStarted","Data":"ebc67536376e1479cf4756e310d9b04391ee37086ecec8bc656755e7081edb75"} Mar 09 16:39:48.035669 master-0 kubenswrapper[7604]: I0309 16:39:48.035617 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:39:50.419626 master-0 kubenswrapper[7604]: I0309 16:39:50.419557 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:39:50.420535 master-0 kubenswrapper[7604]: I0309 16:39:50.420513 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:39:50.420847 master-0 kubenswrapper[7604]: E0309 16:39:50.420799 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:39:50.424091 master-0 kubenswrapper[7604]: I0309 16:39:50.424059 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 09 16:39:51.057139 master-0 kubenswrapper[7604]: I0309 16:39:51.057064 7604 scope.go:117] "RemoveContainer" 
containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e" Mar 09 16:39:51.057705 master-0 kubenswrapper[7604]: E0309 16:39:51.057406 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 09 16:39:52.418517 master-0 kubenswrapper[7604]: I0309 16:39:52.418418 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: E0309 16:39:52.418858 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d4d5a2-1544-4443-acc5-d7eee167a29c" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.418875 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d4d5a2-1544-4443-acc5-d7eee167a29c" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: E0309 16:39:52.418891 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.418897 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: E0309 16:39:52.418912 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.418920 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" containerName="installer" Mar 09 
16:39:52.419371 master-0 kubenswrapper[7604]: E0309 16:39:52.418947 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.418954 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.419098 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d4d5a2-1544-4443-acc5-d7eee167a29c" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.419118 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.419134 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" containerName="installer" Mar 09 16:39:52.419371 master-0 kubenswrapper[7604]: I0309 16:39:52.419143 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerName="installer" Mar 09 16:39:52.419931 master-0 kubenswrapper[7604]: I0309 16:39:52.419787 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.423307 master-0 kubenswrapper[7604]: I0309 16:39:52.423221 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 09 16:39:52.424494 master-0 kubenswrapper[7604]: I0309 16:39:52.423245 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7kft5"
Mar 09 16:39:52.432241 master-0 kubenswrapper[7604]: I0309 16:39:52.432170 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"
Mar 09 16:39:52.436366 master-0 kubenswrapper[7604]: I0309 16:39:52.435730 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4320d00b-9add-4224-9632-d8422fec5b0b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.436366 master-0 kubenswrapper[7604]: I0309 16:39:52.435822 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.436366 master-0 kubenswrapper[7604]: I0309 16:39:52.435866 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-var-lock\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.441992 master-0 kubenswrapper[7604]: I0309 16:39:52.441899 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 09 16:39:52.538179 master-0 kubenswrapper[7604]: I0309 16:39:52.538025 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4320d00b-9add-4224-9632-d8422fec5b0b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.538179 master-0 kubenswrapper[7604]: I0309 16:39:52.538165 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.538179 master-0 kubenswrapper[7604]: I0309 16:39:52.538214 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-var-lock\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.538953 master-0 kubenswrapper[7604]: I0309 16:39:52.538572 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.538953 master-0 kubenswrapper[7604]: I0309 16:39:52.538649 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-var-lock\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.565238 master-0 kubenswrapper[7604]: I0309 16:39:52.565136 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4320d00b-9add-4224-9632-d8422fec5b0b-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:52.754608 master-0 kubenswrapper[7604]: I0309 16:39:52.754529 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:39:53.222814 master-0 kubenswrapper[7604]: I0309 16:39:53.222723 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 09 16:39:54.101139 master-0 kubenswrapper[7604]: I0309 16:39:54.101057 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4320d00b-9add-4224-9632-d8422fec5b0b","Type":"ContainerStarted","Data":"2ef11d86aa2070868cc06b6da364e1a472811e4f000a136a0ce2bb7d159b1085"}
Mar 09 16:39:54.101139 master-0 kubenswrapper[7604]: I0309 16:39:54.101136 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4320d00b-9add-4224-9632-d8422fec5b0b","Type":"ContainerStarted","Data":"cb786c3ebfc5b302bbf77e532b601727b3659c5edd9e40f135a583f9877e73b6"}
Mar 09 16:39:54.137893 master-0 kubenswrapper[7604]: I0309 16:39:54.137770 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.137736632 podStartE2EDuration="2.137736632s" podCreationTimestamp="2026-03-09 16:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:39:54.134004716 +0000 UTC m=+851.187974159" watchObservedRunningTime="2026-03-09 16:39:54.137736632 +0000 UTC m=+851.191706055"
Mar 09 16:40:05.112372 master-0 kubenswrapper[7604]: I0309 16:40:05.112260 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:40:05.113265 master-0 kubenswrapper[7604]: E0309 16:40:05.112626 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:40:18.112144 master-0 kubenswrapper[7604]: I0309 16:40:18.112056 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:40:18.113043 master-0 kubenswrapper[7604]: E0309 16:40:18.112443 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 09 16:40:18.292481 master-0 kubenswrapper[7604]: I0309 16:40:18.292391 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:40:26.352160 master-0 kubenswrapper[7604]: I0309 16:40:26.352073 7604 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 09 16:40:26.352792 master-0 kubenswrapper[7604]: I0309 16:40:26.352402 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138" gracePeriod=30
Mar 09 16:40:26.353855 master-0 kubenswrapper[7604]: I0309 16:40:26.353801 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:40:26.354304 master-0 kubenswrapper[7604]: E0309 16:40:26.354247 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354304 master-0 kubenswrapper[7604]: I0309 16:40:26.354271 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354304 master-0 kubenswrapper[7604]: E0309 16:40:26.354289 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 09 16:40:26.354304 master-0 kubenswrapper[7604]: I0309 16:40:26.354296 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: E0309 16:40:26.354325 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: I0309 16:40:26.354334 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: E0309 16:40:26.354346 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: I0309 16:40:26.354353 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: E0309 16:40:26.354364 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: I0309 16:40:26.354372 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: E0309 16:40:26.354380 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: I0309 16:40:26.354489 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: E0309 16:40:26.354519 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: I0309 16:40:26.354526 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: E0309 16:40:26.354538 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.354709 master-0 kubenswrapper[7604]: I0309 16:40:26.354564 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.354738 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.354752 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.354761 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.354772 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.354806 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.355124 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.355136 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller"
Mar 09 16:40:26.355653 master-0 kubenswrapper[7604]: I0309 16:40:26.355151 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager"
Mar 09 16:40:26.356519 master-0 kubenswrapper[7604]: I0309 16:40:26.356484 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.399140 master-0 kubenswrapper[7604]: I0309 16:40:26.399033 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:40:26.429837 master-0 kubenswrapper[7604]: I0309 16:40:26.429740 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.430310 master-0 kubenswrapper[7604]: I0309 16:40:26.430275 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.532343 master-0 kubenswrapper[7604]: I0309 16:40:26.532174 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.532771 master-0 kubenswrapper[7604]: I0309 16:40:26.532354 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.532771 master-0 kubenswrapper[7604]: I0309 16:40:26.532402 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.532771 master-0 kubenswrapper[7604]: I0309 16:40:26.532536 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.541300 master-0 kubenswrapper[7604]: I0309 16:40:26.541203 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:40:26.572007 master-0 kubenswrapper[7604]: I0309 16:40:26.571900 7604 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="57e06ba6-2a6c-4787-b853-dc846bbbf36b"
Mar 09 16:40:26.634275 master-0 kubenswrapper[7604]: I0309 16:40:26.633953 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 09 16:40:26.634275 master-0 kubenswrapper[7604]: I0309 16:40:26.634067 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 09 16:40:26.634275 master-0 kubenswrapper[7604]: I0309 16:40:26.634158 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 09 16:40:26.634275 master-0 kubenswrapper[7604]: I0309 16:40:26.634220 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 09 16:40:26.634856 master-0 kubenswrapper[7604]: I0309 16:40:26.634299 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 09 16:40:26.634856 master-0 kubenswrapper[7604]: I0309 16:40:26.634833 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:26.634950 master-0 kubenswrapper[7604]: I0309 16:40:26.634884 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:26.634950 master-0 kubenswrapper[7604]: I0309 16:40:26.634911 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:26.634950 master-0 kubenswrapper[7604]: I0309 16:40:26.634937 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:26.635167 master-0 kubenswrapper[7604]: I0309 16:40:26.634962 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:26.694133 master-0 kubenswrapper[7604]: I0309 16:40:26.694030 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:40:26.737161 master-0 kubenswrapper[7604]: I0309 16:40:26.736885 7604 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:26.737161 master-0 kubenswrapper[7604]: I0309 16:40:26.736919 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:26.737161 master-0 kubenswrapper[7604]: I0309 16:40:26.736932 7604 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:26.737161 master-0 kubenswrapper[7604]: I0309 16:40:26.736948 7604 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:26.737161 master-0 kubenswrapper[7604]: I0309 16:40:26.736961 7604 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:27.123243 master-0 kubenswrapper[7604]: I0309 16:40:27.123006 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes"
Mar 09 16:40:27.123532 master-0 kubenswrapper[7604]: I0309 16:40:27.123450 7604 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 09 16:40:27.156140 master-0 kubenswrapper[7604]: I0309 16:40:27.155905 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 09 16:40:27.156140 master-0 kubenswrapper[7604]: I0309 16:40:27.155943 7604 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="57e06ba6-2a6c-4787-b853-dc846bbbf36b"
Mar 09 16:40:27.159159 master-0 kubenswrapper[7604]: I0309 16:40:27.159089 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 09 16:40:27.159159 master-0 kubenswrapper[7604]: I0309 16:40:27.159146 7604 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="57e06ba6-2a6c-4787-b853-dc846bbbf36b"
Mar 09 16:40:27.380018 master-0 kubenswrapper[7604]: I0309 16:40:27.379910 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4e5546da7de03c762cdb76021b225c2b","Type":"ContainerStarted","Data":"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672"}
Mar 09 16:40:27.380944 master-0 kubenswrapper[7604]: I0309 16:40:27.380078 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4e5546da7de03c762cdb76021b225c2b","Type":"ContainerStarted","Data":"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540"}
Mar 09 16:40:27.380944 master-0 kubenswrapper[7604]: I0309 16:40:27.380096 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4e5546da7de03c762cdb76021b225c2b","Type":"ContainerStarted","Data":"7f2531089132b9bca4733555d74da5177ca5f7dcf195edb08aa4b4bc65281b29"}
Mar 09 16:40:27.382616 master-0 kubenswrapper[7604]: I0309 16:40:27.382524 7604 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138" exitCode=0
Mar 09 16:40:27.382616 master-0 kubenswrapper[7604]: I0309 16:40:27.382616 7604 scope.go:117] "RemoveContainer" containerID="01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138"
Mar 09 16:40:27.382901 master-0 kubenswrapper[7604]: I0309 16:40:27.382753 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 09 16:40:27.386775 master-0 kubenswrapper[7604]: I0309 16:40:27.386715 7604 generic.go:334] "Generic (PLEG): container finished" podID="4320d00b-9add-4224-9632-d8422fec5b0b" containerID="2ef11d86aa2070868cc06b6da364e1a472811e4f000a136a0ce2bb7d159b1085" exitCode=0
Mar 09 16:40:27.386927 master-0 kubenswrapper[7604]: I0309 16:40:27.386800 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4320d00b-9add-4224-9632-d8422fec5b0b","Type":"ContainerDied","Data":"2ef11d86aa2070868cc06b6da364e1a472811e4f000a136a0ce2bb7d159b1085"}
Mar 09 16:40:27.423748 master-0 kubenswrapper[7604]: I0309 16:40:27.423683 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:40:27.447192 master-0 kubenswrapper[7604]: I0309 16:40:27.447139 7604 scope.go:117] "RemoveContainer" containerID="f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"
Mar 09 16:40:27.485755 master-0 kubenswrapper[7604]: I0309 16:40:27.485704 7604 scope.go:117] "RemoveContainer" containerID="01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138"
Mar 09 16:40:27.486384 master-0 kubenswrapper[7604]: E0309 16:40:27.486338 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138\": container with ID starting with 01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138 not found: ID does not exist" containerID="01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138"
Mar 09 16:40:27.487480 master-0 kubenswrapper[7604]: I0309 16:40:27.486389 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138"} err="failed to get container status \"01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138\": rpc error: code = NotFound desc = could not find container \"01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138\": container with ID starting with 01df390e15079bc09afa9fcd81df47d48a8bd910cebbe1883b460df908697138 not found: ID does not exist"
Mar 09 16:40:27.487480 master-0 kubenswrapper[7604]: I0309 16:40:27.486474 7604 scope.go:117] "RemoveContainer" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:40:27.487480 master-0 kubenswrapper[7604]: E0309 16:40:27.486810 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e\": container with ID starting with 03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e not found: ID does not exist" containerID="03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"
Mar 09 16:40:27.487480 master-0 kubenswrapper[7604]: I0309 16:40:27.486845 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e"} err="failed to get container status \"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e\": rpc error: code = NotFound desc = could not find container \"03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e\": container with ID starting with 03ba37f69e719d1e6c66f76a7d5d44758c49ab836241ba30b620097498bb268e not found: ID does not exist"
Mar 09 16:40:27.487480 master-0 kubenswrapper[7604]: I0309 16:40:27.486874 7604 scope.go:117] "RemoveContainer" containerID="f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"
Mar 09 16:40:27.490357 master-0 kubenswrapper[7604]: E0309 16:40:27.490314 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613\": container with ID starting with f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613 not found: ID does not exist" containerID="f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"
Mar 09 16:40:27.490528 master-0 kubenswrapper[7604]: I0309 16:40:27.490367 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613"} err="failed to get container status \"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613\": rpc error: code = NotFound desc = could not find container \"f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613\": container with ID starting with f16c8e778e625fa48d824ecbcba7dad2feb6c09865876cd44ae2362fa9942613 not found: ID does not exist"
Mar 09 16:40:28.406991 master-0 kubenswrapper[7604]: I0309 16:40:28.406906 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4e5546da7de03c762cdb76021b225c2b","Type":"ContainerStarted","Data":"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1"}
Mar 09 16:40:28.406991 master-0 kubenswrapper[7604]: I0309 16:40:28.406966 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4e5546da7de03c762cdb76021b225c2b","Type":"ContainerStarted","Data":"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519"}
Mar 09 16:40:28.447118 master-0 kubenswrapper[7604]: I0309 16:40:28.447005 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.446970167 podStartE2EDuration="2.446970167s" podCreationTimestamp="2026-03-09 16:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:40:28.440605006 +0000 UTC m=+885.494574449" watchObservedRunningTime="2026-03-09 16:40:28.446970167 +0000 UTC m=+885.500939600"
Mar 09 16:40:28.721897 master-0 kubenswrapper[7604]: I0309 16:40:28.721854 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:40:28.771889 master-0 kubenswrapper[7604]: I0309 16:40:28.771346 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-var-lock\") pod \"4320d00b-9add-4224-9632-d8422fec5b0b\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") "
Mar 09 16:40:28.771889 master-0 kubenswrapper[7604]: I0309 16:40:28.771525 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-kubelet-dir\") pod \"4320d00b-9add-4224-9632-d8422fec5b0b\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") "
Mar 09 16:40:28.771889 master-0 kubenswrapper[7604]: I0309 16:40:28.771554 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-var-lock" (OuterVolumeSpecName: "var-lock") pod "4320d00b-9add-4224-9632-d8422fec5b0b" (UID: "4320d00b-9add-4224-9632-d8422fec5b0b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:28.771889 master-0 kubenswrapper[7604]: I0309 16:40:28.771653 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4320d00b-9add-4224-9632-d8422fec5b0b-kube-api-access\") pod \"4320d00b-9add-4224-9632-d8422fec5b0b\" (UID: \"4320d00b-9add-4224-9632-d8422fec5b0b\") "
Mar 09 16:40:28.771889 master-0 kubenswrapper[7604]: I0309 16:40:28.771734 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4320d00b-9add-4224-9632-d8422fec5b0b" (UID: "4320d00b-9add-4224-9632-d8422fec5b0b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:40:28.772318 master-0 kubenswrapper[7604]: I0309 16:40:28.772153 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:28.772318 master-0 kubenswrapper[7604]: I0309 16:40:28.772172 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4320d00b-9add-4224-9632-d8422fec5b0b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:28.776360 master-0 kubenswrapper[7604]: I0309 16:40:28.775820 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4320d00b-9add-4224-9632-d8422fec5b0b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4320d00b-9add-4224-9632-d8422fec5b0b" (UID: "4320d00b-9add-4224-9632-d8422fec5b0b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:40:28.873863 master-0 kubenswrapper[7604]: I0309 16:40:28.873784 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4320d00b-9add-4224-9632-d8422fec5b0b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 09 16:40:29.419398 master-0 kubenswrapper[7604]: I0309 16:40:29.419207 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4320d00b-9add-4224-9632-d8422fec5b0b","Type":"ContainerDied","Data":"cb786c3ebfc5b302bbf77e532b601727b3659c5edd9e40f135a583f9877e73b6"}
Mar 09 16:40:29.419398 master-0 kubenswrapper[7604]: I0309 16:40:29.419292 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb786c3ebfc5b302bbf77e532b601727b3659c5edd9e40f135a583f9877e73b6"
Mar 09 16:40:29.419398 master-0 kubenswrapper[7604]: I0309 16:40:29.419294 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:40:33.448877 master-0 kubenswrapper[7604]: I0309 16:40:33.448720 7604 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="4c1869d3a7ddcc58f5543caea28428d5b999484c891498c9388eaad6a5d85b10" exitCode=0
Mar 09 16:40:33.448877 master-0 kubenswrapper[7604]: I0309 16:40:33.448812 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerDied","Data":"4c1869d3a7ddcc58f5543caea28428d5b999484c891498c9388eaad6a5d85b10"}
Mar 09 16:40:33.449582 master-0 kubenswrapper[7604]: I0309 16:40:33.448927 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"0a6dcd96dc0badcacb59d76f3cf7625d66b40a5e5d0f154a56f4d766f6cd06e0"}
Mar 09 16:40:33.449582 master-0 kubenswrapper[7604]: I0309 16:40:33.448972 7604 scope.go:117] "RemoveContainer" containerID="a77339a51d7a4ed44c4a920634c7235e0fcfd324430f106c1b1bcdd4dc11bacc"
Mar 09 16:40:33.726602 master-0 kubenswrapper[7604]: I0309 16:40:33.725570 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:40:33.729022 master-0 kubenswrapper[7604]: I0309 16:40:33.728641 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:40:33.729022 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:40:33.729022 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:40:33.729022 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:40:33.729022 master-0 kubenswrapper[7604]: I0309 16:40:33.728729 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:40:34.728583 master-0 kubenswrapper[7604]: I0309 16:40:34.728484 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:40:34.728583 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:40:34.728583 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:40:34.728583 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:40:34.728583 master-0 kubenswrapper[7604]: I0309 16:40:34.728591 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:40:35.729264 master-0 kubenswrapper[7604]: I0309 16:40:35.729105 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:40:35.729264 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:40:35.729264 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:40:35.729264 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:40:35.730260 master-0 kubenswrapper[7604]: I0309 16:40:35.729268 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:36.695072 master-0 kubenswrapper[7604]: I0309 16:40:36.694957 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:36.695072 master-0 kubenswrapper[7604]: I0309 16:40:36.695063 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:36.695072 master-0 kubenswrapper[7604]: I0309 16:40:36.695085 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:36.695072 master-0 kubenswrapper[7604]: I0309 16:40:36.695103 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:36.699675 master-0 kubenswrapper[7604]: I0309 16:40:36.699610 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:36.700176 master-0 kubenswrapper[7604]: I0309 16:40:36.700125 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:36.729557 master-0 kubenswrapper[7604]: I0309 16:40:36.729490 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:36.729557 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:36.729557 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:36.729557 master-0 
kubenswrapper[7604]: healthz check failed Mar 09 16:40:36.729557 master-0 kubenswrapper[7604]: I0309 16:40:36.729558 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:37.487393 master-0 kubenswrapper[7604]: I0309 16:40:37.487281 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:37.488810 master-0 kubenswrapper[7604]: I0309 16:40:37.488739 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:40:37.728040 master-0 kubenswrapper[7604]: I0309 16:40:37.727953 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:37.728040 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:37.728040 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:37.728040 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:37.728482 master-0 kubenswrapper[7604]: I0309 16:40:37.728051 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:38.727783 master-0 kubenswrapper[7604]: I0309 16:40:38.726737 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:40:38.729805 master-0 kubenswrapper[7604]: I0309 16:40:38.729722 7604 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:38.729805 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:38.729805 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:38.729805 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:38.730057 master-0 kubenswrapper[7604]: I0309 16:40:38.729827 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:39.729056 master-0 kubenswrapper[7604]: I0309 16:40:39.728983 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:39.729056 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:39.729056 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:39.729056 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:39.730143 master-0 kubenswrapper[7604]: I0309 16:40:39.729083 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:40.729876 master-0 kubenswrapper[7604]: I0309 16:40:40.729808 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:40.729876 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:40.729876 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:40.729876 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:40.730579 master-0 kubenswrapper[7604]: I0309 16:40:40.729885 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:41.730168 master-0 kubenswrapper[7604]: I0309 16:40:41.730071 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:41.730168 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:41.730168 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:41.730168 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:41.731109 master-0 kubenswrapper[7604]: I0309 16:40:41.730185 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:42.728716 master-0 kubenswrapper[7604]: I0309 16:40:42.728635 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:42.728716 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:42.728716 master-0 kubenswrapper[7604]: [+]process-running ok 
Mar 09 16:40:42.728716 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:42.728716 master-0 kubenswrapper[7604]: I0309 16:40:42.728714 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:43.728956 master-0 kubenswrapper[7604]: I0309 16:40:43.728507 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:43.728956 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:43.728956 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:43.728956 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:43.728956 master-0 kubenswrapper[7604]: I0309 16:40:43.728568 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:44.728568 master-0 kubenswrapper[7604]: I0309 16:40:44.728477 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:44.728568 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:44.728568 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:44.728568 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:44.729002 master-0 kubenswrapper[7604]: I0309 16:40:44.728576 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:45.729156 master-0 kubenswrapper[7604]: I0309 16:40:45.729051 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:45.729156 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:45.729156 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:45.729156 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:45.730175 master-0 kubenswrapper[7604]: I0309 16:40:45.729174 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:45.797001 master-0 kubenswrapper[7604]: I0309 16:40:45.796910 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rwvsh"] Mar 09 16:40:45.797448 master-0 kubenswrapper[7604]: E0309 16:40:45.797387 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4320d00b-9add-4224-9632-d8422fec5b0b" containerName="installer" Mar 09 16:40:45.797448 master-0 kubenswrapper[7604]: I0309 16:40:45.797406 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4320d00b-9add-4224-9632-d8422fec5b0b" containerName="installer" Mar 09 16:40:45.797634 master-0 kubenswrapper[7604]: I0309 16:40:45.797601 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="4320d00b-9add-4224-9632-d8422fec5b0b" containerName="installer" Mar 09 16:40:45.798293 master-0 kubenswrapper[7604]: I0309 16:40:45.798262 7604 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.802990 master-0 kubenswrapper[7604]: I0309 16:40:45.802928 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-tdm87" Mar 09 16:40:45.803095 master-0 kubenswrapper[7604]: I0309 16:40:45.802995 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 09 16:40:45.862818 master-0 kubenswrapper[7604]: I0309 16:40:45.862717 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrgc\" (UniqueName: \"kubernetes.io/projected/ec3f7633-1338-4282-bf19-5c0e1aa1b074-kube-api-access-ghrgc\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.863174 master-0 kubenswrapper[7604]: I0309 16:40:45.862866 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3f7633-1338-4282-bf19-5c0e1aa1b074-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.863174 master-0 kubenswrapper[7604]: I0309 16:40:45.862920 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec3f7633-1338-4282-bf19-5c0e1aa1b074-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.863174 master-0 kubenswrapper[7604]: I0309 16:40:45.862957 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/ec3f7633-1338-4282-bf19-5c0e1aa1b074-ready\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.965145 master-0 kubenswrapper[7604]: I0309 16:40:45.965031 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec3f7633-1338-4282-bf19-5c0e1aa1b074-ready\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.965145 master-0 kubenswrapper[7604]: I0309 16:40:45.965128 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrgc\" (UniqueName: \"kubernetes.io/projected/ec3f7633-1338-4282-bf19-5c0e1aa1b074-kube-api-access-ghrgc\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.965609 master-0 kubenswrapper[7604]: I0309 16:40:45.965298 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3f7633-1338-4282-bf19-5c0e1aa1b074-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.965609 master-0 kubenswrapper[7604]: I0309 16:40:45.965375 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec3f7633-1338-4282-bf19-5c0e1aa1b074-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.965747 master-0 kubenswrapper[7604]: I0309 16:40:45.965666 7604 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3f7633-1338-4282-bf19-5c0e1aa1b074-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.965835 master-0 kubenswrapper[7604]: I0309 16:40:45.965798 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec3f7633-1338-4282-bf19-5c0e1aa1b074-ready\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:45.966627 master-0 kubenswrapper[7604]: I0309 16:40:45.966576 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec3f7633-1338-4282-bf19-5c0e1aa1b074-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:46.316337 master-0 kubenswrapper[7604]: I0309 16:40:46.316259 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrgc\" (UniqueName: \"kubernetes.io/projected/ec3f7633-1338-4282-bf19-5c0e1aa1b074-kube-api-access-ghrgc\") pod \"cni-sysctl-allowlist-ds-rwvsh\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") " pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:46.419147 master-0 kubenswrapper[7604]: I0309 16:40:46.419025 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:46.556603 master-0 kubenswrapper[7604]: I0309 16:40:46.556521 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" event={"ID":"ec3f7633-1338-4282-bf19-5c0e1aa1b074","Type":"ContainerStarted","Data":"6f2b5a87ea20750abab2c59a2b4adbec95c723b4adeec860456194857551aacb"} Mar 09 16:40:46.728725 master-0 kubenswrapper[7604]: I0309 16:40:46.728657 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:46.728725 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:46.728725 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:46.728725 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:46.729290 master-0 kubenswrapper[7604]: I0309 16:40:46.729238 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:47.568017 master-0 kubenswrapper[7604]: I0309 16:40:47.567935 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" event={"ID":"ec3f7633-1338-4282-bf19-5c0e1aa1b074","Type":"ContainerStarted","Data":"113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05"} Mar 09 16:40:47.568340 master-0 kubenswrapper[7604]: I0309 16:40:47.568206 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:47.592750 master-0 kubenswrapper[7604]: I0309 16:40:47.592671 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" Mar 09 16:40:47.728613 master-0 kubenswrapper[7604]: I0309 16:40:47.728517 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:47.728613 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:47.728613 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:47.728613 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:47.729026 master-0 kubenswrapper[7604]: I0309 16:40:47.728657 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:47.800204 master-0 kubenswrapper[7604]: I0309 16:40:47.800076 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" podStartSLOduration=2.800048265 podStartE2EDuration="2.800048265s" podCreationTimestamp="2026-03-09 16:40:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:40:47.795737904 +0000 UTC m=+904.849707337" watchObservedRunningTime="2026-03-09 16:40:47.800048265 +0000 UTC m=+904.854017688" Mar 09 16:40:48.729046 master-0 kubenswrapper[7604]: I0309 16:40:48.728984 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:48.729046 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:48.729046 master-0 kubenswrapper[7604]: 
[+]process-running ok Mar 09 16:40:48.729046 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:48.729388 master-0 kubenswrapper[7604]: I0309 16:40:48.729074 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:49.728199 master-0 kubenswrapper[7604]: I0309 16:40:49.728106 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:49.728199 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:49.728199 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:49.728199 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:49.728199 master-0 kubenswrapper[7604]: I0309 16:40:49.728200 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:50.728631 master-0 kubenswrapper[7604]: I0309 16:40:50.728544 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:50.728631 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:50.728631 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:50.728631 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:50.729684 master-0 kubenswrapper[7604]: I0309 16:40:50.728661 7604 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:51.728916 master-0 kubenswrapper[7604]: I0309 16:40:51.728823 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:51.728916 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:51.728916 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:51.728916 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:51.729894 master-0 kubenswrapper[7604]: I0309 16:40:51.728972 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:52.499680 master-0 kubenswrapper[7604]: I0309 16:40:52.499589 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rwvsh"] Mar 09 16:40:52.500792 master-0 kubenswrapper[7604]: I0309 16:40:52.500465 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" gracePeriod=30 Mar 09 16:40:52.512372 master-0 kubenswrapper[7604]: I0309 16:40:52.512293 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-gwf86"] Mar 09 16:40:52.512730 master-0 kubenswrapper[7604]: I0309 16:40:52.512662 7604 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="telemeter-client" containerID="cri-o://45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" gracePeriod=30 Mar 09 16:40:52.512826 master-0 kubenswrapper[7604]: I0309 16:40:52.512713 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="kube-rbac-proxy" containerID="cri-o://3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" gracePeriod=30 Mar 09 16:40:52.512826 master-0 kubenswrapper[7604]: I0309 16:40:52.512812 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="reload" containerID="cri-o://653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" gracePeriod=30 Mar 09 16:40:52.728774 master-0 kubenswrapper[7604]: I0309 16:40:52.728673 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:52.728774 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:52.728774 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:52.728774 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:52.728774 master-0 kubenswrapper[7604]: I0309 16:40:52.728782 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:53.011699 master-0 kubenswrapper[7604]: 
I0309 16:40:53.011599 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d4f6dc665-gwf86_268b582b-efd2-44be-9e2a-3ee7322603c9/telemeter-client/0.log" Mar 09 16:40:53.011699 master-0 kubenswrapper[7604]: I0309 16:40:53.011722 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:40:53.200161 master-0 kubenswrapper[7604]: I0309 16:40:53.199837 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-federate-client-tls\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.200161 master-0 kubenswrapper[7604]: I0309 16:40:53.200127 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-metrics-client-ca\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201569 master-0 kubenswrapper[7604]: I0309 16:40:53.200204 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-serving-certs-ca-bundle\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201569 master-0 kubenswrapper[7604]: I0309 16:40:53.200262 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-trusted-ca-bundle\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201569 master-0 kubenswrapper[7604]: I0309 
16:40:53.200348 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-client-tls\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201569 master-0 kubenswrapper[7604]: I0309 16:40:53.200449 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201569 master-0 kubenswrapper[7604]: I0309 16:40:53.200532 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201569 master-0 kubenswrapper[7604]: I0309 16:40:53.200553 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lwx\" (UniqueName: \"kubernetes.io/projected/268b582b-efd2-44be-9e2a-3ee7322603c9-kube-api-access-q9lwx\") pod \"268b582b-efd2-44be-9e2a-3ee7322603c9\" (UID: \"268b582b-efd2-44be-9e2a-3ee7322603c9\") " Mar 09 16:40:53.201834 master-0 kubenswrapper[7604]: I0309 16:40:53.201575 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:40:53.201834 master-0 kubenswrapper[7604]: I0309 16:40:53.201642 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-serving-certs-ca-bundle" (OuterVolumeSpecName: "serving-certs-ca-bundle") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:40:53.201834 master-0 kubenswrapper[7604]: I0309 16:40:53.201751 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-trusted-ca-bundle" (OuterVolumeSpecName: "telemeter-trusted-ca-bundle") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "telemeter-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:40:53.202530 master-0 kubenswrapper[7604]: I0309 16:40:53.202477 7604 reconciler_common.go:293] "Volume detached for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.202611 master-0 kubenswrapper[7604]: I0309 16:40:53.202533 7604 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.202611 master-0 kubenswrapper[7604]: I0309 16:40:53.202557 7604 reconciler_common.go:293] "Volume detached for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/268b582b-efd2-44be-9e2a-3ee7322603c9-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.204455 master-0 kubenswrapper[7604]: I0309 
16:40:53.204343 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-federate-client-tls" (OuterVolumeSpecName: "federate-client-tls") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "federate-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:40:53.205131 master-0 kubenswrapper[7604]: I0309 16:40:53.205072 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268b582b-efd2-44be-9e2a-3ee7322603c9-kube-api-access-q9lwx" (OuterVolumeSpecName: "kube-api-access-q9lwx") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "kube-api-access-q9lwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:40:53.205387 master-0 kubenswrapper[7604]: I0309 16:40:53.205307 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client" (OuterVolumeSpecName: "secret-telemeter-client") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "secret-telemeter-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:40:53.205720 master-0 kubenswrapper[7604]: I0309 16:40:53.205673 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client-kube-rbac-proxy-config" (OuterVolumeSpecName: "secret-telemeter-client-kube-rbac-proxy-config") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "secret-telemeter-client-kube-rbac-proxy-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:40:53.206236 master-0 kubenswrapper[7604]: I0309 16:40:53.206198 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-client-tls" (OuterVolumeSpecName: "telemeter-client-tls") pod "268b582b-efd2-44be-9e2a-3ee7322603c9" (UID: "268b582b-efd2-44be-9e2a-3ee7322603c9"). InnerVolumeSpecName "telemeter-client-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:40:53.304592 master-0 kubenswrapper[7604]: I0309 16:40:53.304489 7604 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.304592 master-0 kubenswrapper[7604]: I0309 16:40:53.304570 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lwx\" (UniqueName: \"kubernetes.io/projected/268b582b-efd2-44be-9e2a-3ee7322603c9-kube-api-access-q9lwx\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.304592 master-0 kubenswrapper[7604]: I0309 16:40:53.304596 7604 reconciler_common.go:293] "Volume detached for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-federate-client-tls\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.304592 master-0 kubenswrapper[7604]: I0309 16:40:53.304609 7604 reconciler_common.go:293] "Volume detached for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-telemeter-client-tls\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.304592 master-0 kubenswrapper[7604]: I0309 16:40:53.304625 7604 reconciler_common.go:293] "Volume detached for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/268b582b-efd2-44be-9e2a-3ee7322603c9-secret-telemeter-client-kube-rbac-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:40:53.620871 master-0 kubenswrapper[7604]: I0309 16:40:53.620817 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d4f6dc665-gwf86_268b582b-efd2-44be-9e2a-3ee7322603c9/telemeter-client/0.log" Mar 09 16:40:53.621251 master-0 kubenswrapper[7604]: I0309 16:40:53.621221 7604 generic.go:334] "Generic (PLEG): container finished" podID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerID="3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" exitCode=0 Mar 09 16:40:53.621323 master-0 kubenswrapper[7604]: I0309 16:40:53.621310 7604 generic.go:334] "Generic (PLEG): container finished" podID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerID="653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" exitCode=0 Mar 09 16:40:53.621390 master-0 kubenswrapper[7604]: I0309 16:40:53.621378 7604 generic.go:334] "Generic (PLEG): container finished" podID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerID="45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" exitCode=2 Mar 09 16:40:53.621684 master-0 kubenswrapper[7604]: I0309 16:40:53.621531 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerDied","Data":"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9"} Mar 09 16:40:53.621761 master-0 kubenswrapper[7604]: I0309 16:40:53.621698 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerDied","Data":"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe"} Mar 09 16:40:53.621761 master-0 kubenswrapper[7604]: I0309 16:40:53.621726 7604 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerDied","Data":"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5"} Mar 09 16:40:53.621761 master-0 kubenswrapper[7604]: I0309 16:40:53.621747 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" event={"ID":"268b582b-efd2-44be-9e2a-3ee7322603c9","Type":"ContainerDied","Data":"cefc455c07dd55ee166873c714c208ba56515212dc9b418766d04bbb74b92132"} Mar 09 16:40:53.621854 master-0 kubenswrapper[7604]: I0309 16:40:53.621755 7604 scope.go:117] "RemoveContainer" containerID="3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" Mar 09 16:40:53.622392 master-0 kubenswrapper[7604]: I0309 16:40:53.621967 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d4f6dc665-gwf86" Mar 09 16:40:53.643785 master-0 kubenswrapper[7604]: I0309 16:40:53.643720 7604 scope.go:117] "RemoveContainer" containerID="653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" Mar 09 16:40:53.671540 master-0 kubenswrapper[7604]: I0309 16:40:53.671134 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-gwf86"] Mar 09 16:40:53.678659 master-0 kubenswrapper[7604]: I0309 16:40:53.678542 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-gwf86"] Mar 09 16:40:53.685345 master-0 kubenswrapper[7604]: I0309 16:40:53.685302 7604 scope.go:117] "RemoveContainer" containerID="45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" Mar 09 16:40:53.718970 master-0 kubenswrapper[7604]: I0309 16:40:53.718899 7604 scope.go:117] "RemoveContainer" containerID="3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" Mar 09 16:40:53.719680 master-0 kubenswrapper[7604]: E0309 
16:40:53.719640 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": container with ID starting with 3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9 not found: ID does not exist" containerID="3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" Mar 09 16:40:53.719796 master-0 kubenswrapper[7604]: I0309 16:40:53.719682 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9"} err="failed to get container status \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": rpc error: code = NotFound desc = could not find container \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": container with ID starting with 3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9 not found: ID does not exist" Mar 09 16:40:53.719796 master-0 kubenswrapper[7604]: I0309 16:40:53.719713 7604 scope.go:117] "RemoveContainer" containerID="653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" Mar 09 16:40:53.720272 master-0 kubenswrapper[7604]: E0309 16:40:53.720200 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": container with ID starting with 653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe not found: ID does not exist" containerID="653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" Mar 09 16:40:53.720272 master-0 kubenswrapper[7604]: I0309 16:40:53.720237 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe"} err="failed to get container status 
\"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": rpc error: code = NotFound desc = could not find container \"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": container with ID starting with 653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe not found: ID does not exist" Mar 09 16:40:53.720765 master-0 kubenswrapper[7604]: I0309 16:40:53.720258 7604 scope.go:117] "RemoveContainer" containerID="45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" Mar 09 16:40:53.721116 master-0 kubenswrapper[7604]: E0309 16:40:53.721045 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": container with ID starting with 45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5 not found: ID does not exist" containerID="45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" Mar 09 16:40:53.721116 master-0 kubenswrapper[7604]: I0309 16:40:53.721082 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5"} err="failed to get container status \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": rpc error: code = NotFound desc = could not find container \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": container with ID starting with 45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5 not found: ID does not exist" Mar 09 16:40:53.721116 master-0 kubenswrapper[7604]: I0309 16:40:53.721106 7604 scope.go:117] "RemoveContainer" containerID="3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" Mar 09 16:40:53.721801 master-0 kubenswrapper[7604]: I0309 16:40:53.721770 7604 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9"} err="failed to get container status \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": rpc error: code = NotFound desc = could not find container \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": container with ID starting with 3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9 not found: ID does not exist" Mar 09 16:40:53.721891 master-0 kubenswrapper[7604]: I0309 16:40:53.721802 7604 scope.go:117] "RemoveContainer" containerID="653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" Mar 09 16:40:53.722114 master-0 kubenswrapper[7604]: I0309 16:40:53.722087 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe"} err="failed to get container status \"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": rpc error: code = NotFound desc = could not find container \"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": container with ID starting with 653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe not found: ID does not exist" Mar 09 16:40:53.722196 master-0 kubenswrapper[7604]: I0309 16:40:53.722113 7604 scope.go:117] "RemoveContainer" containerID="45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" Mar 09 16:40:53.722749 master-0 kubenswrapper[7604]: I0309 16:40:53.722654 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5"} err="failed to get container status \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": rpc error: code = NotFound desc = could not find container \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": container with ID starting with 
45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5 not found: ID does not exist" Mar 09 16:40:53.722861 master-0 kubenswrapper[7604]: I0309 16:40:53.722750 7604 scope.go:117] "RemoveContainer" containerID="3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9" Mar 09 16:40:53.723123 master-0 kubenswrapper[7604]: I0309 16:40:53.723080 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9"} err="failed to get container status \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": rpc error: code = NotFound desc = could not find container \"3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9\": container with ID starting with 3319c4989e42130e7a8ac84f9d57fe6f5f0b732d1a0ed88f42b0a2120832b5a9 not found: ID does not exist" Mar 09 16:40:53.723123 master-0 kubenswrapper[7604]: I0309 16:40:53.723110 7604 scope.go:117] "RemoveContainer" containerID="653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe" Mar 09 16:40:53.723393 master-0 kubenswrapper[7604]: I0309 16:40:53.723358 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe"} err="failed to get container status \"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": rpc error: code = NotFound desc = could not find container \"653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe\": container with ID starting with 653bbcb3f9a879f91976b338a8110097202c024e25a653a82ec61ea4a9354abe not found: ID does not exist" Mar 09 16:40:53.723529 master-0 kubenswrapper[7604]: I0309 16:40:53.723392 7604 scope.go:117] "RemoveContainer" containerID="45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5" Mar 09 16:40:53.723973 master-0 kubenswrapper[7604]: I0309 16:40:53.723748 7604 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5"} err="failed to get container status \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": rpc error: code = NotFound desc = could not find container \"45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5\": container with ID starting with 45d527beb9d693f038961f0c65db693b8c8cf446f0882ac26f25582a3f2503c5 not found: ID does not exist" Mar 09 16:40:53.728762 master-0 kubenswrapper[7604]: I0309 16:40:53.728724 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:53.728762 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:53.728762 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:53.728762 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:53.729765 master-0 kubenswrapper[7604]: I0309 16:40:53.728797 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:54.728882 master-0 kubenswrapper[7604]: I0309 16:40:54.728753 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:54.728882 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:54.728882 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:54.728882 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:54.728882 master-0 
kubenswrapper[7604]: I0309 16:40:54.728858 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:55.124709 master-0 kubenswrapper[7604]: I0309 16:40:55.124415 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" path="/var/lib/kubelet/pods/268b582b-efd2-44be-9e2a-3ee7322603c9/volumes" Mar 09 16:40:55.729114 master-0 kubenswrapper[7604]: I0309 16:40:55.729007 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:55.729114 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:55.729114 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:55.729114 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:55.730148 master-0 kubenswrapper[7604]: I0309 16:40:55.729154 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:56.423396 master-0 kubenswrapper[7604]: E0309 16:40:56.423133 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:40:56.426093 master-0 kubenswrapper[7604]: E0309 16:40:56.426028 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:40:56.428400 master-0 kubenswrapper[7604]: E0309 16:40:56.428321 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 09 16:40:56.428528 master-0 kubenswrapper[7604]: E0309 16:40:56.428453 7604 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins" Mar 09 16:40:56.729918 master-0 kubenswrapper[7604]: I0309 16:40:56.729804 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:56.729918 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:56.729918 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:56.729918 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:56.730949 master-0 kubenswrapper[7604]: I0309 16:40:56.729945 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:57.729028 master-0 
kubenswrapper[7604]: I0309 16:40:57.728932 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:57.729028 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:57.729028 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:57.729028 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:57.729655 master-0 kubenswrapper[7604]: I0309 16:40:57.729029 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:58.728794 master-0 kubenswrapper[7604]: I0309 16:40:58.728605 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:58.728794 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:58.728794 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:58.728794 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:58.729782 master-0 kubenswrapper[7604]: I0309 16:40:58.728792 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:40:59.729263 master-0 kubenswrapper[7604]: I0309 16:40:59.729179 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:40:59.729263 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:40:59.729263 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:40:59.729263 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:40:59.729263 master-0 kubenswrapper[7604]: I0309 16:40:59.729270 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:00.729836 master-0 kubenswrapper[7604]: I0309 16:41:00.729740 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:00.729836 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:00.729836 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:00.729836 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:00.729836 master-0 kubenswrapper[7604]: I0309 16:41:00.729844 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:01.728689 master-0 kubenswrapper[7604]: I0309 16:41:01.728594 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:01.728689 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:01.728689 master-0 kubenswrapper[7604]: 
[+]process-running ok
Mar 09 16:41:01.728689 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:01.729090 master-0 kubenswrapper[7604]: I0309 16:41:01.728706 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:02.729222 master-0 kubenswrapper[7604]: I0309 16:41:02.729136 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:02.729222 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:02.729222 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:02.729222 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:02.730063 master-0 kubenswrapper[7604]: I0309 16:41:02.729239 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:03.020093 master-0 kubenswrapper[7604]: I0309 16:41:03.019844 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: E0309 16:41:03.020261 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="reload"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: I0309 16:41:03.020280 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="reload"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: E0309 16:41:03.020311 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="kube-rbac-proxy"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: I0309 16:41:03.020317 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="kube-rbac-proxy"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: E0309 16:41:03.020327 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="telemeter-client"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: I0309 16:41:03.020335 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="telemeter-client"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: I0309 16:41:03.020496 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="reload"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: I0309 16:41:03.020514 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="telemeter-client"
Mar 09 16:41:03.020578 master-0 kubenswrapper[7604]: I0309 16:41:03.020527 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="268b582b-efd2-44be-9e2a-3ee7322603c9" containerName="kube-rbac-proxy"
Mar 09 16:41:03.021243 master-0 kubenswrapper[7604]: I0309 16:41:03.021207 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.025339 master-0 kubenswrapper[7604]: I0309 16:41:03.025202 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7kft5"
Mar 09 16:41:03.025739 master-0 kubenswrapper[7604]: I0309 16:41:03.025711 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 09 16:41:03.041977 master-0 kubenswrapper[7604]: I0309 16:41:03.041875 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 09 16:41:03.086042 master-0 kubenswrapper[7604]: I0309 16:41:03.085939 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.086732 master-0 kubenswrapper[7604]: I0309 16:41:03.086095 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8139a33-a597-4038-9bb4-183e72f498b4-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.086732 master-0 kubenswrapper[7604]: I0309 16:41:03.086161 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-var-lock\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.187612 master-0 kubenswrapper[7604]: I0309 16:41:03.187488 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-var-lock\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.188038 master-0 kubenswrapper[7604]: I0309 16:41:03.187647 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-var-lock\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.188038 master-0 kubenswrapper[7604]: I0309 16:41:03.187721 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.188038 master-0 kubenswrapper[7604]: I0309 16:41:03.187803 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8139a33-a597-4038-9bb4-183e72f498b4-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.188038 master-0 kubenswrapper[7604]: I0309 16:41:03.187817 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.208117 master-0 kubenswrapper[7604]: I0309 16:41:03.208018 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8139a33-a597-4038-9bb4-183e72f498b4-kube-api-access\") pod \"installer-4-master-0\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.345064 master-0 kubenswrapper[7604]: I0309 16:41:03.344860 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:41:03.728492 master-0 kubenswrapper[7604]: I0309 16:41:03.728404 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:03.728492 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:03.728492 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:03.728492 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:03.728902 master-0 kubenswrapper[7604]: I0309 16:41:03.728491 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:03.785767 master-0 kubenswrapper[7604]: I0309 16:41:03.785694 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 09 16:41:04.716330 master-0 kubenswrapper[7604]: I0309 16:41:04.716259 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a8139a33-a597-4038-9bb4-183e72f498b4","Type":"ContainerStarted","Data":"f045963e70da23fa859bf7a0a6d7963e8dbb9e83018d8e030eee264ed97fa82a"}
Mar 09 16:41:04.716798 master-0 kubenswrapper[7604]: I0309 16:41:04.716779 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a8139a33-a597-4038-9bb4-183e72f498b4","Type":"ContainerStarted","Data":"61b142f4b016040c51452f30737a55d5afae72a9c5e2b5161cafa663238823b5"}
Mar 09 16:41:04.729347 master-0 kubenswrapper[7604]: I0309 16:41:04.729272 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:04.729347 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:04.729347 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:04.729347 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:04.730239 master-0 kubenswrapper[7604]: I0309 16:41:04.729383 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:04.743957 master-0 kubenswrapper[7604]: I0309 16:41:04.743825 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.7437908640000002 podStartE2EDuration="1.743790864s" podCreationTimestamp="2026-03-09 16:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:41:04.737778294 +0000 UTC m=+921.791747727" watchObservedRunningTime="2026-03-09 16:41:04.743790864 +0000 UTC m=+921.797760297"
Mar 09 16:41:05.728374 master-0 kubenswrapper[7604]: I0309 16:41:05.728300 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:05.728374 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:05.728374 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:05.728374 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:05.728374 master-0 kubenswrapper[7604]: I0309 16:41:05.728371 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:06.422191 master-0 kubenswrapper[7604]: E0309 16:41:06.422077 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 09 16:41:06.423944 master-0 kubenswrapper[7604]: E0309 16:41:06.423870 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 09 16:41:06.426314 master-0 kubenswrapper[7604]: E0309 16:41:06.426257 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 09 16:41:06.426475 master-0 kubenswrapper[7604]: E0309 16:41:06.426319 7604 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins"
Mar 09 16:41:06.729181 master-0 kubenswrapper[7604]: I0309 16:41:06.728890 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:06.729181 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:06.729181 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:06.729181 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:06.729181 master-0 kubenswrapper[7604]: I0309 16:41:06.729044 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:07.729806 master-0 kubenswrapper[7604]: I0309 16:41:07.729673 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:07.729806 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:07.729806 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:07.729806 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:07.729806 master-0 kubenswrapper[7604]: I0309 16:41:07.729818 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:08.727699 master-0 kubenswrapper[7604]: I0309 16:41:08.727597 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:08.727699 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:08.727699 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:08.727699 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:08.728387 master-0 kubenswrapper[7604]: I0309 16:41:08.727725 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:09.730036 master-0 kubenswrapper[7604]: I0309 16:41:09.729938 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:09.730036 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:09.730036 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:09.730036 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:09.731011 master-0 kubenswrapper[7604]: I0309 16:41:09.730066 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:10.728931 master-0 kubenswrapper[7604]: I0309 16:41:10.728838 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:10.728931 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:10.728931 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:10.728931 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:10.729471 master-0 kubenswrapper[7604]: I0309 16:41:10.728953 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:11.729500 master-0 kubenswrapper[7604]: I0309 16:41:11.729293 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:11.729500 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:11.729500 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:11.729500 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:11.729500 master-0 kubenswrapper[7604]: I0309 16:41:11.729406 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:12.729070 master-0 kubenswrapper[7604]: I0309 16:41:12.728973 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:12.729070 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:12.729070 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:12.729070 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:12.729480 master-0 kubenswrapper[7604]: I0309 16:41:12.729123 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:13.727392 master-0 kubenswrapper[7604]: I0309 16:41:13.727297 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:13.727392 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:13.727392 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:13.727392 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:13.728311 master-0 kubenswrapper[7604]: I0309 16:41:13.727414 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:14.729534 master-0 kubenswrapper[7604]: I0309 16:41:14.729446 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:14.729534 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:14.729534 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:14.729534 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:14.729534 master-0 kubenswrapper[7604]: I0309 16:41:14.729532 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:15.730840 master-0 kubenswrapper[7604]: I0309 16:41:15.730745 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:15.730840 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:15.730840 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:15.730840 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:15.731622 master-0 kubenswrapper[7604]: I0309 16:41:15.730874 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:16.422876 master-0 kubenswrapper[7604]: E0309 16:41:16.422729 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 09 16:41:16.425754 master-0 kubenswrapper[7604]: E0309 16:41:16.425572 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 09 16:41:16.429023 master-0 kubenswrapper[7604]: E0309 16:41:16.428923 7604 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 09 16:41:16.429351 master-0 kubenswrapper[7604]: E0309 16:41:16.429042 7604 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins"
Mar 09 16:41:16.728822 master-0 kubenswrapper[7604]: I0309 16:41:16.728723 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:16.728822 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:16.728822 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:16.728822 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:16.728822 master-0 kubenswrapper[7604]: I0309 16:41:16.728820 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:17.729724 master-0 kubenswrapper[7604]: I0309 16:41:17.729645 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:17.729724 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:17.729724 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:17.729724 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:17.730784 master-0 kubenswrapper[7604]: I0309 16:41:17.729750 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:18.728893 master-0 kubenswrapper[7604]: I0309 16:41:18.728786 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:18.728893 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:18.728893 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:18.728893 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:18.729397 master-0 kubenswrapper[7604]: I0309 16:41:18.728931 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:19.730878 master-0 kubenswrapper[7604]: I0309 16:41:19.730802 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:19.730878 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:19.730878 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:19.730878 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:19.731801 master-0 kubenswrapper[7604]: I0309 16:41:19.731699 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:20.729487 master-0 kubenswrapper[7604]: I0309 16:41:20.729346 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:20.729487 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:20.729487 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:20.729487 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:20.730291 master-0 kubenswrapper[7604]: I0309 16:41:20.729509 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:21.729871 master-0 kubenswrapper[7604]: I0309 16:41:21.729757 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:21.729871 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:21.729871 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:21.729871 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:21.730747 master-0 kubenswrapper[7604]: I0309 16:41:21.729896 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:22.626599 master-0 kubenswrapper[7604]: I0309 16:41:22.626529 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rwvsh_ec3f7633-1338-4282-bf19-5c0e1aa1b074/kube-multus-additional-cni-plugins/0.log"
Mar 09 16:41:22.626599 master-0 kubenswrapper[7604]: I0309 16:41:22.626613 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh"
Mar 09 16:41:22.728620 master-0 kubenswrapper[7604]: I0309 16:41:22.728544 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:22.728620 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:22.728620 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:22.728620 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:22.729039 master-0 kubenswrapper[7604]: I0309 16:41:22.728632 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:22.729039 master-0 kubenswrapper[7604]: I0309 16:41:22.728659 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3f7633-1338-4282-bf19-5c0e1aa1b074-tuning-conf-dir\") pod \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") "
Mar 09 16:41:22.729039 master-0 kubenswrapper[7604]: I0309 16:41:22.728822 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghrgc\" (UniqueName: \"kubernetes.io/projected/ec3f7633-1338-4282-bf19-5c0e1aa1b074-kube-api-access-ghrgc\") pod \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") "
Mar 09 16:41:22.729039 master-0 kubenswrapper[7604]: I0309 16:41:22.728913 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec3f7633-1338-4282-bf19-5c0e1aa1b074-ready\") pod \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") "
Mar 09 16:41:22.729039 master-0 kubenswrapper[7604]: I0309 16:41:22.728967 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3f7633-1338-4282-bf19-5c0e1aa1b074-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "ec3f7633-1338-4282-bf19-5c0e1aa1b074" (UID: "ec3f7633-1338-4282-bf19-5c0e1aa1b074"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:41:22.729039 master-0 kubenswrapper[7604]: I0309 16:41:22.729022 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec3f7633-1338-4282-bf19-5c0e1aa1b074-cni-sysctl-allowlist\") pod \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\" (UID: \"ec3f7633-1338-4282-bf19-5c0e1aa1b074\") "
Mar 09 16:41:22.729676 master-0 kubenswrapper[7604]: I0309 16:41:22.729634 7604 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec3f7633-1338-4282-bf19-5c0e1aa1b074-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:41:22.729739 master-0 kubenswrapper[7604]: I0309 16:41:22.729666 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3f7633-1338-4282-bf19-5c0e1aa1b074-ready" (OuterVolumeSpecName: "ready") pod "ec3f7633-1338-4282-bf19-5c0e1aa1b074" (UID: "ec3f7633-1338-4282-bf19-5c0e1aa1b074"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 16:41:22.729775 master-0 kubenswrapper[7604]: I0309 16:41:22.729687 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3f7633-1338-4282-bf19-5c0e1aa1b074-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "ec3f7633-1338-4282-bf19-5c0e1aa1b074" (UID: "ec3f7633-1338-4282-bf19-5c0e1aa1b074"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:41:22.732875 master-0 kubenswrapper[7604]: I0309 16:41:22.732817 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3f7633-1338-4282-bf19-5c0e1aa1b074-kube-api-access-ghrgc" (OuterVolumeSpecName: "kube-api-access-ghrgc") pod "ec3f7633-1338-4282-bf19-5c0e1aa1b074" (UID: "ec3f7633-1338-4282-bf19-5c0e1aa1b074"). InnerVolumeSpecName "kube-api-access-ghrgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:41:22.831405 master-0 kubenswrapper[7604]: I0309 16:41:22.831180 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghrgc\" (UniqueName: \"kubernetes.io/projected/ec3f7633-1338-4282-bf19-5c0e1aa1b074-kube-api-access-ghrgc\") on node \"master-0\" DevicePath \"\""
Mar 09 16:41:22.831405 master-0 kubenswrapper[7604]: I0309 16:41:22.831249 7604 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ec3f7633-1338-4282-bf19-5c0e1aa1b074-ready\") on node \"master-0\" DevicePath \"\""
Mar 09 16:41:22.831405 master-0 kubenswrapper[7604]: I0309 16:41:22.831264 7604 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec3f7633-1338-4282-bf19-5c0e1aa1b074-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Mar 09 16:41:22.863309 master-0 kubenswrapper[7604]: I0309 16:41:22.863242 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-rwvsh_ec3f7633-1338-4282-bf19-5c0e1aa1b074/kube-multus-additional-cni-plugins/0.log"
Mar 09 16:41:22.863309 master-0 kubenswrapper[7604]: I0309 16:41:22.863309 7604 generic.go:334] "Generic (PLEG): container finished" podID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05" exitCode=137
Mar 09 16:41:22.863786 master-0 kubenswrapper[7604]: I0309 16:41:22.863353 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" event={"ID":"ec3f7633-1338-4282-bf19-5c0e1aa1b074","Type":"ContainerDied","Data":"113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05"}
Mar 09 16:41:22.863786 master-0 kubenswrapper[7604]: I0309 16:41:22.863397 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh" event={"ID":"ec3f7633-1338-4282-bf19-5c0e1aa1b074","Type":"ContainerDied","Data":"6f2b5a87ea20750abab2c59a2b4adbec95c723b4adeec860456194857551aacb"}
Mar 09 16:41:22.863786 master-0 kubenswrapper[7604]: I0309 16:41:22.863441 7604 scope.go:117] "RemoveContainer" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05"
Mar 09 16:41:22.863786 master-0 kubenswrapper[7604]: I0309 16:41:22.863609 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-rwvsh"
Mar 09 16:41:22.884476 master-0 kubenswrapper[7604]: I0309 16:41:22.884350 7604 scope.go:117] "RemoveContainer" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05"
Mar 09 16:41:22.886140 master-0 kubenswrapper[7604]: E0309 16:41:22.886097 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05\": container with ID starting with 113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05 not found: ID does not exist" containerID="113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05"
Mar 09 16:41:22.886297 master-0 kubenswrapper[7604]: I0309 16:41:22.886147 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05"} err="failed to get container status \"113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05\": rpc error: code = NotFound desc = could not find container \"113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05\": container with ID starting with 113f0ef649c9256343e42637a38d061368005717fc0582aeb029a0bc2c12fe05 not found: ID does not exist"
Mar 09 16:41:22.909977 master-0 kubenswrapper[7604]: I0309 16:41:22.909860 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rwvsh"]
Mar 09 16:41:22.916594 master-0 kubenswrapper[7604]: I0309 16:41:22.916464 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-rwvsh"]
Mar 09 16:41:23.124883 master-0 kubenswrapper[7604]: I0309 16:41:23.124703 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" path="/var/lib/kubelet/pods/ec3f7633-1338-4282-bf19-5c0e1aa1b074/volumes"
Mar 09 16:41:23.727950 master-0 kubenswrapper[7604]: I0309 16:41:23.727871 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:23.727950 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:23.727950 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:23.727950 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:23.727950 master-0 kubenswrapper[7604]: I0309 16:41:23.727931 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:24.728992 master-0 kubenswrapper[7604]: I0309 16:41:24.728919 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:24.728992 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:24.728992 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:24.728992 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:24.729957
master-0 kubenswrapper[7604]: I0309 16:41:24.729008 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:25.728284 master-0 kubenswrapper[7604]: I0309 16:41:25.728188 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:25.728284 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:25.728284 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:25.728284 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:25.728776 master-0 kubenswrapper[7604]: I0309 16:41:25.728290 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:26.729694 master-0 kubenswrapper[7604]: I0309 16:41:26.729573 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:26.729694 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:26.729694 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:26.729694 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:26.730716 master-0 kubenswrapper[7604]: I0309 16:41:26.729709 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:27.729330 master-0 kubenswrapper[7604]: I0309 16:41:27.729259 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:27.729330 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:27.729330 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:27.729330 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:27.729789 master-0 kubenswrapper[7604]: I0309 16:41:27.729344 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:28.728102 master-0 kubenswrapper[7604]: I0309 16:41:28.728008 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:28.728102 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:28.728102 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:28.728102 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:28.728541 master-0 kubenswrapper[7604]: I0309 16:41:28.728108 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:29.729382 master-0 kubenswrapper[7604]: I0309 16:41:29.729306 7604 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:29.729382 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:29.729382 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:29.729382 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:29.729382 master-0 kubenswrapper[7604]: I0309 16:41:29.729404 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:30.729315 master-0 kubenswrapper[7604]: I0309 16:41:30.729072 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:30.729315 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:30.729315 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:30.729315 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:30.729315 master-0 kubenswrapper[7604]: I0309 16:41:30.729179 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:31.727772 master-0 kubenswrapper[7604]: I0309 16:41:31.727690 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:41:31.727772 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:31.727772 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:31.727772 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:31.728207 master-0 kubenswrapper[7604]: I0309 16:41:31.727785 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:32.729512 master-0 kubenswrapper[7604]: I0309 16:41:32.729386 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:32.729512 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:32.729512 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:32.729512 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:32.730618 master-0 kubenswrapper[7604]: I0309 16:41:32.729531 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:33.728115 master-0 kubenswrapper[7604]: I0309 16:41:33.728049 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:33.728115 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:33.728115 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:33.728115 master-0 kubenswrapper[7604]: healthz 
check failed Mar 09 16:41:33.728634 master-0 kubenswrapper[7604]: I0309 16:41:33.728128 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:34.729405 master-0 kubenswrapper[7604]: I0309 16:41:34.729331 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:34.729405 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:34.729405 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:34.729405 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:34.730266 master-0 kubenswrapper[7604]: I0309 16:41:34.729530 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:35.729383 master-0 kubenswrapper[7604]: I0309 16:41:35.729292 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:35.729383 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:35.729383 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:35.729383 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:35.730294 master-0 kubenswrapper[7604]: I0309 16:41:35.729412 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" 
podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:36.729483 master-0 kubenswrapper[7604]: I0309 16:41:36.729320 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:36.729483 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:36.729483 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:36.729483 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:36.729483 master-0 kubenswrapper[7604]: I0309 16:41:36.729451 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:37.034678 master-0 kubenswrapper[7604]: I0309 16:41:37.034459 7604 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 09 16:41:37.034979 master-0 kubenswrapper[7604]: I0309 16:41:37.034931 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager" containerID="cri-o://5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" gracePeriod=30 Mar 09 16:41:37.035193 master-0 kubenswrapper[7604]: I0309 16:41:37.035072 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="cluster-policy-controller" 
containerID="cri-o://a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" gracePeriod=30 Mar 09 16:41:37.035302 master-0 kubenswrapper[7604]: I0309 16:41:37.035150 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" gracePeriod=30 Mar 09 16:41:37.035364 master-0 kubenswrapper[7604]: I0309 16:41:37.035072 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" gracePeriod=30 Mar 09 16:41:37.035671 master-0 kubenswrapper[7604]: I0309 16:41:37.035600 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: E0309 16:41:37.036019 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036041 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: E0309 16:41:37.036059 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-recovery-controller" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036066 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5546da7de03c762cdb76021b225c2b" 
containerName="kube-controller-manager-recovery-controller" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: E0309 16:41:37.036081 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036089 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: E0309 16:41:37.036123 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="cluster-policy-controller" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036131 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="cluster-policy-controller" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: E0309 16:41:37.036152 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-cert-syncer" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036160 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-cert-syncer" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036300 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-recovery-controller" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036335 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3f7633-1338-4282-bf19-5c0e1aa1b074" containerName="kube-multus-additional-cni-plugins" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036354 7604 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036369 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="kube-controller-manager-cert-syncer" Mar 09 16:41:37.036446 master-0 kubenswrapper[7604]: I0309 16:41:37.036379 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e5546da7de03c762cdb76021b225c2b" containerName="cluster-policy-controller" Mar 09 16:41:37.062951 master-0 kubenswrapper[7604]: I0309 16:41:37.062845 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.062951 master-0 kubenswrapper[7604]: I0309 16:41:37.062948 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.164531 master-0 kubenswrapper[7604]: I0309 16:41:37.164450 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.164531 master-0 kubenswrapper[7604]: I0309 16:41:37.164524 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.164904 master-0 kubenswrapper[7604]: I0309 16:41:37.164705 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.164904 master-0 kubenswrapper[7604]: I0309 16:41:37.164802 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.445060 master-0 kubenswrapper[7604]: I0309 16:41:37.444995 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4e5546da7de03c762cdb76021b225c2b/kube-controller-manager-cert-syncer/0.log" Mar 09 16:41:37.446516 master-0 kubenswrapper[7604]: I0309 16:41:37.446073 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.508028 master-0 kubenswrapper[7604]: I0309 16:41:37.507843 7604 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="4e5546da7de03c762cdb76021b225c2b" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" Mar 09 16:41:37.570149 master-0 kubenswrapper[7604]: I0309 16:41:37.570069 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-resource-dir\") pod \"4e5546da7de03c762cdb76021b225c2b\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " Mar 09 16:41:37.570654 master-0 kubenswrapper[7604]: I0309 16:41:37.570247 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-cert-dir\") pod \"4e5546da7de03c762cdb76021b225c2b\" (UID: \"4e5546da7de03c762cdb76021b225c2b\") " Mar 09 16:41:37.570654 master-0 kubenswrapper[7604]: I0309 16:41:37.570271 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4e5546da7de03c762cdb76021b225c2b" (UID: "4e5546da7de03c762cdb76021b225c2b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:41:37.570654 master-0 kubenswrapper[7604]: I0309 16:41:37.570593 7604 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:41:37.570779 master-0 kubenswrapper[7604]: I0309 16:41:37.570578 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4e5546da7de03c762cdb76021b225c2b" (UID: "4e5546da7de03c762cdb76021b225c2b"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:41:37.672292 master-0 kubenswrapper[7604]: I0309 16:41:37.672074 7604 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e5546da7de03c762cdb76021b225c2b-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:41:37.728530 master-0 kubenswrapper[7604]: I0309 16:41:37.728454 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:37.728530 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:37.728530 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:37.728530 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:37.728930 master-0 kubenswrapper[7604]: I0309 16:41:37.728545 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:37.993816 master-0 kubenswrapper[7604]: I0309 16:41:37.993738 7604 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4e5546da7de03c762cdb76021b225c2b/kube-controller-manager-cert-syncer/0.log" Mar 09 16:41:37.995009 master-0 kubenswrapper[7604]: I0309 16:41:37.994932 7604 generic.go:334] "Generic (PLEG): container finished" podID="4e5546da7de03c762cdb76021b225c2b" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" exitCode=0 Mar 09 16:41:37.995009 master-0 kubenswrapper[7604]: I0309 16:41:37.994990 7604 generic.go:334] "Generic (PLEG): container finished" podID="4e5546da7de03c762cdb76021b225c2b" containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" exitCode=2 Mar 09 16:41:37.995009 master-0 kubenswrapper[7604]: I0309 16:41:37.995001 7604 generic.go:334] "Generic (PLEG): container finished" podID="4e5546da7de03c762cdb76021b225c2b" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" exitCode=0 Mar 09 16:41:37.995009 master-0 kubenswrapper[7604]: I0309 16:41:37.995014 7604 generic.go:334] "Generic (PLEG): container finished" podID="4e5546da7de03c762cdb76021b225c2b" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" exitCode=0 Mar 09 16:41:37.995235 master-0 kubenswrapper[7604]: I0309 16:41:37.995039 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:41:37.995235 master-0 kubenswrapper[7604]: I0309 16:41:37.995079 7604 scope.go:117] "RemoveContainer" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" Mar 09 16:41:37.997455 master-0 kubenswrapper[7604]: I0309 16:41:37.997366 7604 generic.go:334] "Generic (PLEG): container finished" podID="a8139a33-a597-4038-9bb4-183e72f498b4" containerID="f045963e70da23fa859bf7a0a6d7963e8dbb9e83018d8e030eee264ed97fa82a" exitCode=0 Mar 09 16:41:37.997547 master-0 kubenswrapper[7604]: I0309 16:41:37.997455 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a8139a33-a597-4038-9bb4-183e72f498b4","Type":"ContainerDied","Data":"f045963e70da23fa859bf7a0a6d7963e8dbb9e83018d8e030eee264ed97fa82a"} Mar 09 16:41:37.998882 master-0 kubenswrapper[7604]: I0309 16:41:37.998823 7604 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="4e5546da7de03c762cdb76021b225c2b" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" Mar 09 16:41:38.021095 master-0 kubenswrapper[7604]: I0309 16:41:38.020775 7604 scope.go:117] "RemoveContainer" containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" Mar 09 16:41:38.031128 master-0 kubenswrapper[7604]: I0309 16:41:38.030910 7604 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="4e5546da7de03c762cdb76021b225c2b" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" Mar 09 16:41:38.047045 master-0 kubenswrapper[7604]: I0309 16:41:38.046572 7604 scope.go:117] "RemoveContainer" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" Mar 09 16:41:38.066508 master-0 kubenswrapper[7604]: I0309 
16:41:38.066401 7604 scope.go:117] "RemoveContainer" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" Mar 09 16:41:38.090068 master-0 kubenswrapper[7604]: I0309 16:41:38.089995 7604 scope.go:117] "RemoveContainer" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" Mar 09 16:41:38.090821 master-0 kubenswrapper[7604]: E0309 16:41:38.090710 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": container with ID starting with 98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1 not found: ID does not exist" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" Mar 09 16:41:38.090821 master-0 kubenswrapper[7604]: I0309 16:41:38.090770 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1"} err="failed to get container status \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": rpc error: code = NotFound desc = could not find container \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": container with ID starting with 98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1 not found: ID does not exist" Mar 09 16:41:38.090821 master-0 kubenswrapper[7604]: I0309 16:41:38.090815 7604 scope.go:117] "RemoveContainer" containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" Mar 09 16:41:38.092042 master-0 kubenswrapper[7604]: E0309 16:41:38.091983 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": container with ID starting with c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519 not found: ID does not exist" 
containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" Mar 09 16:41:38.092130 master-0 kubenswrapper[7604]: I0309 16:41:38.092061 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519"} err="failed to get container status \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": rpc error: code = NotFound desc = could not find container \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": container with ID starting with c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519 not found: ID does not exist" Mar 09 16:41:38.092130 master-0 kubenswrapper[7604]: I0309 16:41:38.092116 7604 scope.go:117] "RemoveContainer" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" Mar 09 16:41:38.092746 master-0 kubenswrapper[7604]: E0309 16:41:38.092701 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": container with ID starting with a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672 not found: ID does not exist" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" Mar 09 16:41:38.092836 master-0 kubenswrapper[7604]: I0309 16:41:38.092790 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672"} err="failed to get container status \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": rpc error: code = NotFound desc = could not find container \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": container with ID starting with a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672 not found: ID does not exist" Mar 09 16:41:38.092878 master-0 
kubenswrapper[7604]: I0309 16:41:38.092851 7604 scope.go:117] "RemoveContainer" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" Mar 09 16:41:38.093293 master-0 kubenswrapper[7604]: E0309 16:41:38.093268 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": container with ID starting with 5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540 not found: ID does not exist" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" Mar 09 16:41:38.093470 master-0 kubenswrapper[7604]: I0309 16:41:38.093292 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540"} err="failed to get container status \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": rpc error: code = NotFound desc = could not find container \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": container with ID starting with 5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540 not found: ID does not exist" Mar 09 16:41:38.093470 master-0 kubenswrapper[7604]: I0309 16:41:38.093329 7604 scope.go:117] "RemoveContainer" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" Mar 09 16:41:38.093886 master-0 kubenswrapper[7604]: I0309 16:41:38.093836 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1"} err="failed to get container status \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": rpc error: code = NotFound desc = could not find container \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": container with ID starting with 
98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1 not found: ID does not exist" Mar 09 16:41:38.093957 master-0 kubenswrapper[7604]: I0309 16:41:38.093884 7604 scope.go:117] "RemoveContainer" containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" Mar 09 16:41:38.094345 master-0 kubenswrapper[7604]: I0309 16:41:38.094311 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519"} err="failed to get container status \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": rpc error: code = NotFound desc = could not find container \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": container with ID starting with c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519 not found: ID does not exist" Mar 09 16:41:38.094345 master-0 kubenswrapper[7604]: I0309 16:41:38.094340 7604 scope.go:117] "RemoveContainer" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" Mar 09 16:41:38.094868 master-0 kubenswrapper[7604]: I0309 16:41:38.094830 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672"} err="failed to get container status \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": rpc error: code = NotFound desc = could not find container \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": container with ID starting with a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672 not found: ID does not exist" Mar 09 16:41:38.094951 master-0 kubenswrapper[7604]: I0309 16:41:38.094867 7604 scope.go:117] "RemoveContainer" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" Mar 09 16:41:38.095271 master-0 kubenswrapper[7604]: I0309 16:41:38.095242 7604 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540"} err="failed to get container status \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": rpc error: code = NotFound desc = could not find container \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": container with ID starting with 5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540 not found: ID does not exist" Mar 09 16:41:38.095271 master-0 kubenswrapper[7604]: I0309 16:41:38.095267 7604 scope.go:117] "RemoveContainer" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" Mar 09 16:41:38.095928 master-0 kubenswrapper[7604]: I0309 16:41:38.095733 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1"} err="failed to get container status \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": rpc error: code = NotFound desc = could not find container \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": container with ID starting with 98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1 not found: ID does not exist" Mar 09 16:41:38.095928 master-0 kubenswrapper[7604]: I0309 16:41:38.095763 7604 scope.go:117] "RemoveContainer" containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" Mar 09 16:41:38.097623 master-0 kubenswrapper[7604]: I0309 16:41:38.097571 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519"} err="failed to get container status \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": rpc error: code = NotFound desc = could not find container \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": container with ID starting with 
c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519 not found: ID does not exist" Mar 09 16:41:38.097623 master-0 kubenswrapper[7604]: I0309 16:41:38.097608 7604 scope.go:117] "RemoveContainer" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" Mar 09 16:41:38.099283 master-0 kubenswrapper[7604]: I0309 16:41:38.099071 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672"} err="failed to get container status \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": rpc error: code = NotFound desc = could not find container \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": container with ID starting with a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672 not found: ID does not exist" Mar 09 16:41:38.099283 master-0 kubenswrapper[7604]: I0309 16:41:38.099179 7604 scope.go:117] "RemoveContainer" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" Mar 09 16:41:38.100712 master-0 kubenswrapper[7604]: I0309 16:41:38.100646 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540"} err="failed to get container status \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": rpc error: code = NotFound desc = could not find container \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": container with ID starting with 5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540 not found: ID does not exist" Mar 09 16:41:38.100712 master-0 kubenswrapper[7604]: I0309 16:41:38.100679 7604 scope.go:117] "RemoveContainer" containerID="98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1" Mar 09 16:41:38.101183 master-0 kubenswrapper[7604]: I0309 16:41:38.101111 7604 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1"} err="failed to get container status \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": rpc error: code = NotFound desc = could not find container \"98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1\": container with ID starting with 98902bb14bc8065fbd2cde76f6fa32ddb8e25379e54b550d7b98d115476badc1 not found: ID does not exist" Mar 09 16:41:38.101249 master-0 kubenswrapper[7604]: I0309 16:41:38.101183 7604 scope.go:117] "RemoveContainer" containerID="c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519" Mar 09 16:41:38.101953 master-0 kubenswrapper[7604]: I0309 16:41:38.101794 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519"} err="failed to get container status \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": rpc error: code = NotFound desc = could not find container \"c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519\": container with ID starting with c6430279eec8ed82a45b918acd23b175790f02bac3ca3d61be7b5a32f93a9519 not found: ID does not exist" Mar 09 16:41:38.101953 master-0 kubenswrapper[7604]: I0309 16:41:38.101886 7604 scope.go:117] "RemoveContainer" containerID="a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672" Mar 09 16:41:38.102394 master-0 kubenswrapper[7604]: I0309 16:41:38.102355 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672"} err="failed to get container status \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": rpc error: code = NotFound desc = could not find container \"a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672\": container with ID starting with 
a7d51d4e56d328604130d739db483eb6cb10afdef610524f7da98c0c33f7f672 not found: ID does not exist" Mar 09 16:41:38.102394 master-0 kubenswrapper[7604]: I0309 16:41:38.102391 7604 scope.go:117] "RemoveContainer" containerID="5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540" Mar 09 16:41:38.102747 master-0 kubenswrapper[7604]: I0309 16:41:38.102715 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540"} err="failed to get container status \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": rpc error: code = NotFound desc = could not find container \"5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540\": container with ID starting with 5ba3a8b7c226b72ea81224098bff48ab385189b54031c864ec5ea262ccd93540 not found: ID does not exist" Mar 09 16:41:38.728657 master-0 kubenswrapper[7604]: I0309 16:41:38.728572 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:38.728657 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:38.728657 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:38.728657 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:38.729172 master-0 kubenswrapper[7604]: I0309 16:41:38.728678 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:39.122163 master-0 kubenswrapper[7604]: I0309 16:41:39.122060 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e5546da7de03c762cdb76021b225c2b" 
path="/var/lib/kubelet/pods/4e5546da7de03c762cdb76021b225c2b/volumes" Mar 09 16:41:39.305823 master-0 kubenswrapper[7604]: I0309 16:41:39.305740 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 09 16:41:39.503708 master-0 kubenswrapper[7604]: I0309 16:41:39.503600 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-var-lock\") pod \"a8139a33-a597-4038-9bb4-183e72f498b4\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " Mar 09 16:41:39.504108 master-0 kubenswrapper[7604]: I0309 16:41:39.503766 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8139a33-a597-4038-9bb4-183e72f498b4-kube-api-access\") pod \"a8139a33-a597-4038-9bb4-183e72f498b4\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " Mar 09 16:41:39.504108 master-0 kubenswrapper[7604]: I0309 16:41:39.503802 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-var-lock" (OuterVolumeSpecName: "var-lock") pod "a8139a33-a597-4038-9bb4-183e72f498b4" (UID: "a8139a33-a597-4038-9bb4-183e72f498b4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:41:39.504108 master-0 kubenswrapper[7604]: I0309 16:41:39.503950 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-kubelet-dir\") pod \"a8139a33-a597-4038-9bb4-183e72f498b4\" (UID: \"a8139a33-a597-4038-9bb4-183e72f498b4\") " Mar 09 16:41:39.504108 master-0 kubenswrapper[7604]: I0309 16:41:39.504054 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8139a33-a597-4038-9bb4-183e72f498b4" (UID: "a8139a33-a597-4038-9bb4-183e72f498b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:41:39.505626 master-0 kubenswrapper[7604]: I0309 16:41:39.505567 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:41:39.505626 master-0 kubenswrapper[7604]: I0309 16:41:39.505626 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8139a33-a597-4038-9bb4-183e72f498b4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:41:39.508527 master-0 kubenswrapper[7604]: I0309 16:41:39.508331 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8139a33-a597-4038-9bb4-183e72f498b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8139a33-a597-4038-9bb4-183e72f498b4" (UID: "a8139a33-a597-4038-9bb4-183e72f498b4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:41:39.607576 master-0 kubenswrapper[7604]: I0309 16:41:39.607470 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8139a33-a597-4038-9bb4-183e72f498b4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:41:39.729152 master-0 kubenswrapper[7604]: I0309 16:41:39.729075 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:39.729152 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:39.729152 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:39.729152 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:39.729659 master-0 kubenswrapper[7604]: I0309 16:41:39.729176 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:40.034575 master-0 kubenswrapper[7604]: I0309 16:41:40.034472 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"a8139a33-a597-4038-9bb4-183e72f498b4","Type":"ContainerDied","Data":"61b142f4b016040c51452f30737a55d5afae72a9c5e2b5161cafa663238823b5"} Mar 09 16:41:40.034575 master-0 kubenswrapper[7604]: I0309 16:41:40.034525 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 09 16:41:40.034575 master-0 kubenswrapper[7604]: I0309 16:41:40.034566 7604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b142f4b016040c51452f30737a55d5afae72a9c5e2b5161cafa663238823b5" Mar 09 16:41:40.036283 master-0 kubenswrapper[7604]: I0309 16:41:40.036252 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/4.log" Mar 09 16:41:40.036677 master-0 kubenswrapper[7604]: I0309 16:41:40.036655 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/3.log" Mar 09 16:41:40.037189 master-0 kubenswrapper[7604]: I0309 16:41:40.037076 7604 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" exitCode=1 Mar 09 16:41:40.037189 master-0 kubenswrapper[7604]: I0309 16:41:40.037117 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerDied","Data":"656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0"} Mar 09 16:41:40.037189 master-0 kubenswrapper[7604]: I0309 16:41:40.037164 7604 scope.go:117] "RemoveContainer" containerID="ed515bdfd83c606cb113b7024889d302992f35c1871e1a20fb245f7263736ff0" Mar 09 16:41:40.037700 master-0 kubenswrapper[7604]: I0309 16:41:40.037671 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" Mar 09 16:41:40.038744 master-0 kubenswrapper[7604]: E0309 16:41:40.038062 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:41:40.728914 master-0 kubenswrapper[7604]: I0309 16:41:40.728808 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:40.728914 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:40.728914 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:40.728914 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:40.729875 master-0 kubenswrapper[7604]: I0309 16:41:40.728919 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:41.045840 master-0 kubenswrapper[7604]: I0309 16:41:41.045672 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/4.log" Mar 09 16:41:41.728715 master-0 kubenswrapper[7604]: I0309 16:41:41.728622 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:41.728715 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:41.728715 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:41.728715 
master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:41.729548 master-0 kubenswrapper[7604]: I0309 16:41:41.728737 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:42.729491 master-0 kubenswrapper[7604]: I0309 16:41:42.729367 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:42.729491 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:42.729491 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:42.729491 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:42.729491 master-0 kubenswrapper[7604]: I0309 16:41:42.729481 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:43.728189 master-0 kubenswrapper[7604]: I0309 16:41:43.728056 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:43.728189 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:43.728189 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:43.728189 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:43.728189 master-0 kubenswrapper[7604]: I0309 16:41:43.728132 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:44.729323 master-0 kubenswrapper[7604]: I0309 16:41:44.729244 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:44.729323 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:44.729323 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:44.729323 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:44.729989 master-0 kubenswrapper[7604]: I0309 16:41:44.729351 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:45.729220 master-0 kubenswrapper[7604]: I0309 16:41:45.729120 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:45.729220 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:45.729220 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:45.729220 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:45.730043 master-0 kubenswrapper[7604]: I0309 16:41:45.729252 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:46.729065 
master-0 kubenswrapper[7604]: I0309 16:41:46.728942 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:46.729065 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:46.729065 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:46.729065 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:46.729555 master-0 kubenswrapper[7604]: I0309 16:41:46.729077 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:47.728564 master-0 kubenswrapper[7604]: I0309 16:41:47.728361 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:41:47.728564 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:41:47.728564 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:41:47.728564 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:41:47.728564 master-0 kubenswrapper[7604]: I0309 16:41:47.728483 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:41:48.728990 master-0 kubenswrapper[7604]: I0309 16:41:48.728910 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:48.728990 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:48.728990 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:48.728990 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:48.730394 master-0 kubenswrapper[7604]: I0309 16:41:48.730348 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:49.728349 master-0 kubenswrapper[7604]: I0309 16:41:49.728272 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:49.728349 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:49.728349 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:49.728349 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:49.728870 master-0 kubenswrapper[7604]: I0309 16:41:49.728364 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:50.728888 master-0 kubenswrapper[7604]: I0309 16:41:50.728802 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:50.728888 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:50.728888 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:50.728888 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:50.728888 master-0 kubenswrapper[7604]: I0309 16:41:50.728898 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:51.728947 master-0 kubenswrapper[7604]: I0309 16:41:51.728865 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:51.728947 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:51.728947 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:51.728947 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:51.729870 master-0 kubenswrapper[7604]: I0309 16:41:51.728962 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:52.110408 master-0 kubenswrapper[7604]: I0309 16:41:52.110249 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:41:52.144637 master-0 kubenswrapper[7604]: I0309 16:41:52.144581 7604 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a4c9552d-d7f7-4e6a-9d82-ada4bc9359d3"
Mar 09 16:41:52.144637 master-0 kubenswrapper[7604]: I0309 16:41:52.144629 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a4c9552d-d7f7-4e6a-9d82-ada4bc9359d3"
Mar 09 16:41:52.181008 master-0 kubenswrapper[7604]: I0309 16:41:52.180947 7604 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:41:52.184033 master-0 kubenswrapper[7604]: I0309 16:41:52.183998 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:41:52.190131 master-0 kubenswrapper[7604]: I0309 16:41:52.190056 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:41:52.195414 master-0 kubenswrapper[7604]: I0309 16:41:52.195357 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:41:52.199406 master-0 kubenswrapper[7604]: I0309 16:41:52.199367 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:41:52.214404 master-0 kubenswrapper[7604]: W0309 16:41:52.214336 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee901e15ed65fb7aa5785ec8ec0563e.slice/crio-1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7 WatchSource:0}: Error finding container 1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7: Status 404 returned error can't find the container with id 1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7
Mar 09 16:41:52.730627 master-0 kubenswrapper[7604]: I0309 16:41:52.730554 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:52.730627 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:52.730627 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:52.730627 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:52.731545 master-0 kubenswrapper[7604]: I0309 16:41:52.730653 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:53.164943 master-0 kubenswrapper[7604]: I0309 16:41:53.164830 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"3c38b2115cd52d1efef54c2999128dc674a18b9803bfdcdba9d9e455d6aa049a"}
Mar 09 16:41:53.164943 master-0 kubenswrapper[7604]: I0309 16:41:53.164935 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"4cd8903e8e22ba82f42ce990c7d672d208e9b2502ddb3553b9e1798f91e13ece"}
Mar 09 16:41:53.164943 master-0 kubenswrapper[7604]: I0309 16:41:53.164953 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08"}
Mar 09 16:41:53.165249 master-0 kubenswrapper[7604]: I0309 16:41:53.164971 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7"}
Mar 09 16:41:53.728979 master-0 kubenswrapper[7604]: I0309 16:41:53.728860 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:53.728979 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:53.728979 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:53.728979 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:53.728979 master-0 kubenswrapper[7604]: I0309 16:41:53.728935 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:54.112095 master-0 kubenswrapper[7604]: I0309 16:41:54.112000 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0"
Mar 09 16:41:54.112705 master-0 kubenswrapper[7604]: E0309 16:41:54.112381 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd"
Mar 09 16:41:54.174296 master-0 kubenswrapper[7604]: I0309 16:41:54.174246 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"55b9dd03a97a7153346e305d1d756d1e7bf45a58d0547c62d3e8a40594f9dbaa"}
Mar 09 16:41:54.201703 master-0 kubenswrapper[7604]: I0309 16:41:54.201618 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.201596812 podStartE2EDuration="2.201596812s" podCreationTimestamp="2026-03-09 16:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:41:54.199017862 +0000 UTC m=+971.252987285" watchObservedRunningTime="2026-03-09 16:41:54.201596812 +0000 UTC m=+971.255566235"
Mar 09 16:41:54.728478 master-0 kubenswrapper[7604]: I0309 16:41:54.728376 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:54.728478 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:54.728478 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:54.728478 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:54.728796 master-0 kubenswrapper[7604]: I0309 16:41:54.728483 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:55.729176 master-0 kubenswrapper[7604]: I0309 16:41:55.729070 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:55.729176 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:55.729176 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:55.729176 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:55.729980 master-0 kubenswrapper[7604]: I0309 16:41:55.729180 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:56.728503 master-0 kubenswrapper[7604]: I0309 16:41:56.728402 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:56.728503 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:56.728503 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:56.728503 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:56.728903 master-0 kubenswrapper[7604]: I0309 16:41:56.728521 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:57.727940 master-0 kubenswrapper[7604]: I0309 16:41:57.727882 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:57.727940 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:57.727940 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:57.727940 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:57.728520 master-0 kubenswrapper[7604]: I0309 16:41:57.727951 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:58.728880 master-0 kubenswrapper[7604]: I0309 16:41:58.728783 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:58.728880 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:58.728880 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:58.728880 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:58.729561 master-0 kubenswrapper[7604]: I0309 16:41:58.728913 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:41:59.728883 master-0 kubenswrapper[7604]: I0309 16:41:59.728803 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:41:59.728883 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:41:59.728883 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:41:59.728883 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:41:59.728883 master-0 kubenswrapper[7604]: I0309 16:41:59.728873 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:00.728343 master-0 kubenswrapper[7604]: I0309 16:42:00.728264 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:00.728343 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:00.728343 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:00.728343 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:00.728343 master-0 kubenswrapper[7604]: I0309 16:42:00.728330 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:01.728648 master-0 kubenswrapper[7604]: I0309 16:42:01.728575 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:01.728648 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:01.728648 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:01.728648 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:01.729679 master-0 kubenswrapper[7604]: I0309 16:42:01.728688 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:02.195806 master-0 kubenswrapper[7604]: I0309 16:42:02.195752 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.196206 master-0 kubenswrapper[7604]: I0309 16:42:02.196191 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.196293 master-0 kubenswrapper[7604]: I0309 16:42:02.196282 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.196364 master-0 kubenswrapper[7604]: I0309 16:42:02.196355 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.200319 master-0 kubenswrapper[7604]: I0309 16:42:02.200262 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.200827 master-0 kubenswrapper[7604]: I0309 16:42:02.200772 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.233286 master-0 kubenswrapper[7604]: I0309 16:42:02.233237 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.234071 master-0 kubenswrapper[7604]: I0309 16:42:02.234036 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:42:02.729082 master-0 kubenswrapper[7604]: I0309 16:42:02.728997 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:02.729082 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:02.729082 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:02.729082 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:02.729741 master-0 kubenswrapper[7604]: I0309 16:42:02.729114 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:03.728042 master-0 kubenswrapper[7604]: I0309 16:42:03.727946 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:03.728042 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:03.728042 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:03.728042 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:03.728042 master-0 kubenswrapper[7604]: I0309 16:42:03.728032 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:04.727659 master-0 kubenswrapper[7604]: I0309 16:42:04.727573 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:04.727659 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:04.727659 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:04.727659 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:04.728317 master-0 kubenswrapper[7604]: I0309 16:42:04.727671 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:05.730019 master-0 kubenswrapper[7604]: I0309 16:42:05.729947 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:05.730019 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:05.730019 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:05.730019 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:05.730019 master-0 kubenswrapper[7604]: I0309 16:42:05.730034 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:06.728520 master-0 kubenswrapper[7604]: I0309 16:42:06.728461 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:06.728520 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:06.728520 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:06.728520 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:06.728863 master-0 kubenswrapper[7604]: I0309 16:42:06.728543 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:07.727932 master-0 kubenswrapper[7604]: I0309 16:42:07.727859 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:07.727932 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:07.727932 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:07.727932 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:07.727932 master-0 kubenswrapper[7604]: I0309 16:42:07.727934 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:08.112028 master-0 kubenswrapper[7604]: I0309 16:42:08.111845 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0"
Mar 09 16:42:08.112225 master-0 kubenswrapper[7604]: E0309 16:42:08.112167 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd"
Mar 09 16:42:08.728913 master-0 kubenswrapper[7604]: I0309 16:42:08.728839 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:08.728913 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:08.728913 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:08.728913 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:08.730322 master-0 kubenswrapper[7604]: I0309 16:42:08.728943 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:09.728739 master-0 kubenswrapper[7604]: I0309 16:42:09.728674 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:09.728739 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:09.728739 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:09.728739 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:09.729385 master-0 kubenswrapper[7604]: I0309 16:42:09.728761 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:10.727708 master-0 kubenswrapper[7604]: I0309 16:42:10.727643 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:10.727708 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:10.727708 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:10.727708 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:10.727708 master-0 kubenswrapper[7604]: I0309 16:42:10.727712 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:11.728838 master-0 kubenswrapper[7604]: I0309 16:42:11.728771 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:11.728838 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:11.728838 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:11.728838 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:11.729592 master-0 kubenswrapper[7604]: I0309 16:42:11.728884 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:12.728527 master-0 kubenswrapper[7604]: I0309 16:42:12.728449 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:12.728527 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:12.728527 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:12.728527 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:12.729703 master-0 kubenswrapper[7604]: I0309 16:42:12.729646 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:13.729203 master-0 kubenswrapper[7604]: I0309 16:42:13.729130 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:13.729203 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:13.729203 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:13.729203 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:13.729203 master-0 kubenswrapper[7604]: I0309 16:42:13.729206 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:14.729029 master-0 kubenswrapper[7604]: I0309 16:42:14.728882 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:14.729029 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:14.729029 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:14.729029 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:14.729880 master-0 kubenswrapper[7604]: I0309 16:42:14.729040 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:15.728728 master-0 kubenswrapper[7604]: I0309 16:42:15.728635 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:15.728728 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:15.728728 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:15.728728 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:15.729108 master-0 kubenswrapper[7604]: I0309 16:42:15.728753 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:16.727508 master-0 kubenswrapper[7604]: I0309 16:42:16.727393 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:16.727508 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:16.727508 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:16.727508 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:16.728358 master-0 kubenswrapper[7604]: I0309 16:42:16.727537 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:17.727931 master-0 kubenswrapper[7604]: I0309 16:42:17.727857 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:17.727931 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:17.727931 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:17.727931 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:17.727931 master-0 kubenswrapper[7604]: I0309 16:42:17.727927 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:18.729623 master-0 kubenswrapper[7604]: I0309 16:42:18.729539 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:18.729623 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:18.729623 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:18.729623 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:18.729623 master-0 kubenswrapper[7604]: I0309 16:42:18.729634 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:19.727869 master-0 kubenswrapper[7604]: I0309 16:42:19.727797 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:19.727869 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:19.727869 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:19.727869 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:19.728341 master-0 kubenswrapper[7604]: I0309 16:42:19.727892 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:20.111363 master-0 kubenswrapper[7604]: I0309 16:42:20.111236 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0"
Mar 09 16:42:20.111970 master-0 kubenswrapper[7604]: E0309 16:42:20.111655 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd"
Mar 09 16:42:20.729142 master-0 kubenswrapper[7604]: I0309 16:42:20.729074 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:20.729142 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:20.729142 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:20.729142 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:20.729486 master-0 kubenswrapper[7604]: I0309 16:42:20.729164 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:21.728856 master-0 kubenswrapper[7604]: I0309 16:42:21.728746 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:21.728856 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:21.728856 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:21.728856 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:21.728856 master-0 kubenswrapper[7604]: I0309 16:42:21.728838 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:22.728056 master-0 kubenswrapper[7604]: I0309 16:42:22.727773 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:22.728056 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:22.728056 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:22.728056 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:22.728056 master-0 kubenswrapper[7604]: I0309 16:42:22.727879 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:23.728146 master-0 kubenswrapper[7604]: I0309 16:42:23.728102 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:42:23.728146 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:42:23.728146 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:42:23.728146 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:42:23.728873 master-0 kubenswrapper[7604]: I0309 16:42:23.728173 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:42:24.728929 master-0 kubenswrapper[7604]: I0309 16:42:24.728835 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason
withheld Mar 09 16:42:24.728929 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:24.728929 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:24.728929 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:24.729793 master-0 kubenswrapper[7604]: I0309 16:42:24.728941 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:25.728663 master-0 kubenswrapper[7604]: I0309 16:42:25.728355 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:25.728663 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:25.728663 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:25.728663 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:25.729410 master-0 kubenswrapper[7604]: I0309 16:42:25.728700 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:26.729094 master-0 kubenswrapper[7604]: I0309 16:42:26.728891 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:26.729094 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:26.729094 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:26.729094 master-0 
kubenswrapper[7604]: healthz check failed Mar 09 16:42:26.729094 master-0 kubenswrapper[7604]: I0309 16:42:26.729004 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:27.729861 master-0 kubenswrapper[7604]: I0309 16:42:27.729779 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:27.729861 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:27.729861 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:27.729861 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:27.729861 master-0 kubenswrapper[7604]: I0309 16:42:27.729866 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:28.727929 master-0 kubenswrapper[7604]: I0309 16:42:28.727842 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:28.727929 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:28.727929 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:28.727929 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:28.728386 master-0 kubenswrapper[7604]: I0309 16:42:28.727934 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:29.729157 master-0 kubenswrapper[7604]: I0309 16:42:29.729081 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:29.729157 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:29.729157 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:29.729157 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:29.729968 master-0 kubenswrapper[7604]: I0309 16:42:29.729166 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:30.728444 master-0 kubenswrapper[7604]: I0309 16:42:30.728368 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:30.728444 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:30.728444 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:30.728444 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:30.728760 master-0 kubenswrapper[7604]: I0309 16:42:30.728468 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:31.727676 
master-0 kubenswrapper[7604]: I0309 16:42:31.727629 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:31.727676 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:31.727676 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:31.727676 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:31.728366 master-0 kubenswrapper[7604]: I0309 16:42:31.727692 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:32.727793 master-0 kubenswrapper[7604]: I0309 16:42:32.727753 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:42:32.727793 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:42:32.727793 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:42:32.727793 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:42:32.728492 master-0 kubenswrapper[7604]: I0309 16:42:32.728461 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:42:32.728609 master-0 kubenswrapper[7604]: I0309 16:42:32.728596 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:42:32.729901 master-0 
kubenswrapper[7604]: I0309 16:42:32.729837 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0a6dcd96dc0badcacb59d76f3cf7625d66b40a5e5d0f154a56f4d766f6cd06e0"} pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" containerMessage="Container router failed startup probe, will be restarted" Mar 09 16:42:32.729993 master-0 kubenswrapper[7604]: I0309 16:42:32.729923 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" containerID="cri-o://0a6dcd96dc0badcacb59d76f3cf7625d66b40a5e5d0f154a56f4d766f6cd06e0" gracePeriod=3600 Mar 09 16:42:33.114712 master-0 kubenswrapper[7604]: I0309 16:42:33.114581 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" Mar 09 16:42:33.114909 master-0 kubenswrapper[7604]: E0309 16:42:33.114809 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:42:47.111097 master-0 kubenswrapper[7604]: I0309 16:42:47.111045 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" Mar 09 16:42:47.111818 master-0 kubenswrapper[7604]: E0309 16:42:47.111332 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:42:58.111853 master-0 kubenswrapper[7604]: I0309 16:42:58.111787 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" Mar 09 16:42:58.112458 master-0 kubenswrapper[7604]: E0309 16:42:58.112065 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:43:12.111441 master-0 kubenswrapper[7604]: I0309 16:43:12.111374 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" Mar 09 16:43:12.712340 master-0 kubenswrapper[7604]: I0309 16:43:12.712263 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/4.log" Mar 09 16:43:12.712813 master-0 kubenswrapper[7604]: I0309 16:43:12.712771 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023"} Mar 09 16:43:19.767945 master-0 kubenswrapper[7604]: I0309 16:43:19.767892 7604 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="0a6dcd96dc0badcacb59d76f3cf7625d66b40a5e5d0f154a56f4d766f6cd06e0" exitCode=0 Mar 09 16:43:19.768991 
master-0 kubenswrapper[7604]: I0309 16:43:19.767948 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerDied","Data":"0a6dcd96dc0badcacb59d76f3cf7625d66b40a5e5d0f154a56f4d766f6cd06e0"} Mar 09 16:43:19.768991 master-0 kubenswrapper[7604]: I0309 16:43:19.767992 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"34d69e01c0df1a8808ea1e61ee678a2f4eb359f9a66a8c80ee688b834fc7aa8b"} Mar 09 16:43:19.768991 master-0 kubenswrapper[7604]: I0309 16:43:19.768016 7604 scope.go:117] "RemoveContainer" containerID="4c1869d3a7ddcc58f5543caea28428d5b999484c891498c9388eaad6a5d85b10" Mar 09 16:43:20.726577 master-0 kubenswrapper[7604]: I0309 16:43:20.726476 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:43:20.729724 master-0 kubenswrapper[7604]: I0309 16:43:20.729629 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:20.729724 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:20.729724 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:20.729724 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:20.730080 master-0 kubenswrapper[7604]: I0309 16:43:20.729731 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:21.728554 master-0 kubenswrapper[7604]: I0309 
16:43:21.728464 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:21.728554 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:21.728554 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:21.728554 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:21.728554 master-0 kubenswrapper[7604]: I0309 16:43:21.728547 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:22.727979 master-0 kubenswrapper[7604]: I0309 16:43:22.727908 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:22.727979 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:22.727979 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:22.727979 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:22.728515 master-0 kubenswrapper[7604]: I0309 16:43:22.728001 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:23.727546 master-0 kubenswrapper[7604]: I0309 16:43:23.727478 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:23.727546 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:23.727546 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:23.727546 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:23.728133 master-0 kubenswrapper[7604]: I0309 16:43:23.727553 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:24.728372 master-0 kubenswrapper[7604]: I0309 16:43:24.728291 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:24.728372 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:24.728372 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:24.728372 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:24.729272 master-0 kubenswrapper[7604]: I0309 16:43:24.728370 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:25.727358 master-0 kubenswrapper[7604]: I0309 16:43:25.727292 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:25.727358 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:25.727358 master-0 kubenswrapper[7604]: [+]process-running ok 
Mar 09 16:43:25.727358 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:25.727358 master-0 kubenswrapper[7604]: I0309 16:43:25.727349 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:26.728815 master-0 kubenswrapper[7604]: I0309 16:43:26.728693 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:26.728815 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:26.728815 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:26.728815 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:26.729940 master-0 kubenswrapper[7604]: I0309 16:43:26.728828 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:27.728867 master-0 kubenswrapper[7604]: I0309 16:43:27.728792 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:27.728867 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:27.728867 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:27.728867 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:27.729530 master-0 kubenswrapper[7604]: I0309 16:43:27.728900 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:28.725561 master-0 kubenswrapper[7604]: I0309 16:43:28.725465 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:43:28.727971 master-0 kubenswrapper[7604]: I0309 16:43:28.727906 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:28.727971 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:28.727971 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:28.727971 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:28.728156 master-0 kubenswrapper[7604]: I0309 16:43:28.728017 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:29.728959 master-0 kubenswrapper[7604]: I0309 16:43:29.728891 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:29.728959 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:29.728959 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:29.728959 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:29.731117 master-0 kubenswrapper[7604]: I0309 16:43:29.728965 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:30.728781 master-0 kubenswrapper[7604]: I0309 16:43:30.728695 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:30.728781 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:30.728781 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:30.728781 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:30.728781 master-0 kubenswrapper[7604]: I0309 16:43:30.728782 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:31.727844 master-0 kubenswrapper[7604]: I0309 16:43:31.727760 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:31.727844 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:31.727844 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:31.727844 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:31.728253 master-0 kubenswrapper[7604]: I0309 16:43:31.727865 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:32.728063 
master-0 kubenswrapper[7604]: I0309 16:43:32.727985 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:32.728063 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:32.728063 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:32.728063 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:32.728838 master-0 kubenswrapper[7604]: I0309 16:43:32.728071 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:33.729013 master-0 kubenswrapper[7604]: I0309 16:43:33.728919 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:33.729013 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:33.729013 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:33.729013 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:33.729777 master-0 kubenswrapper[7604]: I0309 16:43:33.729038 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:34.728220 master-0 kubenswrapper[7604]: I0309 16:43:34.728143 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:34.728220 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:34.728220 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:34.728220 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:34.728520 master-0 kubenswrapper[7604]: I0309 16:43:34.728219 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:35.727657 master-0 kubenswrapper[7604]: I0309 16:43:35.727544 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:35.727657 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:35.727657 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:35.727657 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:35.728548 master-0 kubenswrapper[7604]: I0309 16:43:35.727676 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:36.728131 master-0 kubenswrapper[7604]: I0309 16:43:36.728049 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:36.728131 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:36.728131 master-0 
kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:36.728131 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:36.728131 master-0 kubenswrapper[7604]: I0309 16:43:36.728136 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:37.728974 master-0 kubenswrapper[7604]: I0309 16:43:37.728918 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:37.728974 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:37.728974 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:37.728974 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:37.729701 master-0 kubenswrapper[7604]: I0309 16:43:37.728995 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:38.729148 master-0 kubenswrapper[7604]: I0309 16:43:38.729038 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:38.729148 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:38.729148 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:38.729148 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:38.730173 master-0 kubenswrapper[7604]: I0309 16:43:38.729151 7604 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:39.728978 master-0 kubenswrapper[7604]: I0309 16:43:39.728886 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:39.728978 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:39.728978 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:39.728978 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:39.729799 master-0 kubenswrapper[7604]: I0309 16:43:39.728995 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:40.729320 master-0 kubenswrapper[7604]: I0309 16:43:40.729248 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:40.729320 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:40.729320 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:40.729320 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:40.730231 master-0 kubenswrapper[7604]: I0309 16:43:40.730188 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 09 16:43:41.727938 master-0 kubenswrapper[7604]: I0309 16:43:41.727876 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:41.727938 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:41.727938 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:41.727938 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:41.728409 master-0 kubenswrapper[7604]: I0309 16:43:41.728380 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:42.728111 master-0 kubenswrapper[7604]: I0309 16:43:42.728023 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:42.728111 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:42.728111 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:42.728111 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:42.728887 master-0 kubenswrapper[7604]: I0309 16:43:42.728151 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:43.727378 master-0 kubenswrapper[7604]: I0309 16:43:43.727239 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:43.727378 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:43.727378 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:43.727378 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:43.727378 master-0 kubenswrapper[7604]: I0309 16:43:43.727341 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:44.728716 master-0 kubenswrapper[7604]: I0309 16:43:44.728650 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:44.728716 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:44.728716 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:44.728716 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:44.729286 master-0 kubenswrapper[7604]: I0309 16:43:44.728729 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:45.728195 master-0 kubenswrapper[7604]: I0309 16:43:45.728126 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:45.728195 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 
16:43:45.728195 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:45.728195 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:45.728592 master-0 kubenswrapper[7604]: I0309 16:43:45.728223 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:46.728320 master-0 kubenswrapper[7604]: I0309 16:43:46.728236 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:46.728320 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:46.728320 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:46.728320 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:46.729125 master-0 kubenswrapper[7604]: I0309 16:43:46.728332 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:47.728379 master-0 kubenswrapper[7604]: I0309 16:43:47.728306 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:47.728379 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:47.728379 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:47.728379 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:47.729032 master-0 kubenswrapper[7604]: I0309 16:43:47.728405 
7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:48.728733 master-0 kubenswrapper[7604]: I0309 16:43:48.728645 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:48.728733 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:48.728733 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:48.728733 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:48.729907 master-0 kubenswrapper[7604]: I0309 16:43:48.728757 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:49.728144 master-0 kubenswrapper[7604]: I0309 16:43:49.728057 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:49.728144 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:49.728144 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:49.728144 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:49.729168 master-0 kubenswrapper[7604]: I0309 16:43:49.728150 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 09 16:43:50.728456 master-0 kubenswrapper[7604]: I0309 16:43:50.728338 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:50.728456 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:50.728456 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:50.728456 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:50.729360 master-0 kubenswrapper[7604]: I0309 16:43:50.729313 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:51.728310 master-0 kubenswrapper[7604]: I0309 16:43:51.728202 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:51.728310 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:51.728310 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:51.728310 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:51.729209 master-0 kubenswrapper[7604]: I0309 16:43:51.728321 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:52.727726 master-0 kubenswrapper[7604]: I0309 16:43:52.727666 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:52.727726 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:52.727726 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:52.727726 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:52.728030 master-0 kubenswrapper[7604]: I0309 16:43:52.727741 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:53.728513 master-0 kubenswrapper[7604]: I0309 16:43:53.728459 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:53.728513 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:53.728513 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:53.728513 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:53.729140 master-0 kubenswrapper[7604]: I0309 16:43:53.728533 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:54.729263 master-0 kubenswrapper[7604]: I0309 16:43:54.729159 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:54.729263 master-0 kubenswrapper[7604]: 
[-]has-synced failed: reason withheld Mar 09 16:43:54.729263 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:54.729263 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:54.730062 master-0 kubenswrapper[7604]: I0309 16:43:54.729276 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:55.727902 master-0 kubenswrapper[7604]: I0309 16:43:55.727827 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:55.727902 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:55.727902 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:55.727902 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:55.728274 master-0 kubenswrapper[7604]: I0309 16:43:55.727968 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:56.728040 master-0 kubenswrapper[7604]: I0309 16:43:56.727946 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:56.728040 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:56.728040 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:56.728040 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:56.728658 master-0 
kubenswrapper[7604]: I0309 16:43:56.728589 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:57.728568 master-0 kubenswrapper[7604]: I0309 16:43:57.728505 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:57.728568 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:57.728568 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:57.728568 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:57.729243 master-0 kubenswrapper[7604]: I0309 16:43:57.728590 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:58.728090 master-0 kubenswrapper[7604]: I0309 16:43:58.728015 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:58.728090 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:58.728090 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:58.728090 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:58.728442 master-0 kubenswrapper[7604]: I0309 16:43:58.728104 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:43:59.728505 master-0 kubenswrapper[7604]: I0309 16:43:59.728399 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:43:59.728505 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:43:59.728505 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:43:59.728505 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:43:59.729309 master-0 kubenswrapper[7604]: I0309 16:43:59.728526 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:00.728446 master-0 kubenswrapper[7604]: I0309 16:44:00.728311 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:00.728446 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:00.728446 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:00.728446 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:00.729293 master-0 kubenswrapper[7604]: I0309 16:44:00.728470 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:01.728040 master-0 kubenswrapper[7604]: I0309 16:44:01.727962 7604 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:01.728040 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:01.728040 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:01.728040 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:01.728468 master-0 kubenswrapper[7604]: I0309 16:44:01.728049 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:02.729658 master-0 kubenswrapper[7604]: I0309 16:44:02.729523 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:02.729658 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:02.729658 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:02.729658 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:02.730801 master-0 kubenswrapper[7604]: I0309 16:44:02.729661 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:03.729264 master-0 kubenswrapper[7604]: I0309 16:44:03.729182 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 
16:44:03.729264 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:03.729264 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:03.729264 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:03.729264 master-0 kubenswrapper[7604]: I0309 16:44:03.729263 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:04.727943 master-0 kubenswrapper[7604]: I0309 16:44:04.727890 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:04.727943 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:04.727943 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:04.727943 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:04.728619 master-0 kubenswrapper[7604]: I0309 16:44:04.727958 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:05.729614 master-0 kubenswrapper[7604]: I0309 16:44:05.729511 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:05.729614 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:05.729614 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:05.729614 master-0 kubenswrapper[7604]: healthz 
check failed Mar 09 16:44:05.729614 master-0 kubenswrapper[7604]: I0309 16:44:05.729614 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:06.729031 master-0 kubenswrapper[7604]: I0309 16:44:06.728929 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:06.729031 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:06.729031 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:06.729031 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:06.729031 master-0 kubenswrapper[7604]: I0309 16:44:06.729026 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:07.729124 master-0 kubenswrapper[7604]: I0309 16:44:07.729013 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:07.729124 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:07.729124 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:07.729124 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:07.730022 master-0 kubenswrapper[7604]: I0309 16:44:07.729371 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" 
podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:08.729074 master-0 kubenswrapper[7604]: I0309 16:44:08.729003 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:08.729074 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:08.729074 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:08.729074 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:08.729730 master-0 kubenswrapper[7604]: I0309 16:44:08.729108 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:09.728808 master-0 kubenswrapper[7604]: I0309 16:44:09.728711 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:09.728808 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:09.728808 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:09.728808 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:09.729222 master-0 kubenswrapper[7604]: I0309 16:44:09.728816 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:10.728786 master-0 kubenswrapper[7604]: I0309 16:44:10.728716 7604 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:10.728786 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:10.728786 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:10.728786 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:10.729568 master-0 kubenswrapper[7604]: I0309 16:44:10.728804 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:11.728587 master-0 kubenswrapper[7604]: I0309 16:44:11.728498 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:11.728587 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:11.728587 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:11.728587 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:11.728587 master-0 kubenswrapper[7604]: I0309 16:44:11.728580 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:12.729341 master-0 kubenswrapper[7604]: I0309 16:44:12.729254 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:44:12.729341 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:44:12.729341 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:44:12.729341 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:44:12.729341 master-0 kubenswrapper[7604]: I0309 16:44:12.729340 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:44:13.727361 master-0 kubenswrapper[7604]: I0309 16:44:13.727303 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:44:13.727361 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:44:13.727361 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:44:13.727361 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:44:13.727675 master-0 kubenswrapper[7604]: I0309 16:44:13.727373 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... identical patch_prober.go:28 / prober.go:107 startup-probe failure cycles for pod openshift-ingress/router-default-79f8cd6fdd-rvnwf repeated roughly once per second from Mar 09 16:44:14.727772 through Mar 09 16:44:50.727827; elided here ...]
Mar 09 16:44:50.814240 master-0 kubenswrapper[7604]: I0309 16:44:50.814185 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 09 16:44:50.814787 master-0 kubenswrapper[7604]: E0309 16:44:50.814769 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8139a33-a597-4038-9bb4-183e72f498b4" containerName="installer"
Mar 09 16:44:50.814879 master-0 kubenswrapper[7604]: I0309 16:44:50.814865 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8139a33-a597-4038-9bb4-183e72f498b4" containerName="installer"
Mar 09 16:44:50.815105 master-0 kubenswrapper[7604]: I0309 16:44:50.815089 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8139a33-a597-4038-9bb4-183e72f498b4" containerName="installer"
Mar 09 16:44:50.815684 master-0 kubenswrapper[7604]: I0309 16:44:50.815665 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.818038 master-0 kubenswrapper[7604]: I0309 16:44:50.817957 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 09 16:44:50.818038 master-0 kubenswrapper[7604]: I0309 16:44:50.818005 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-cd6zf"
Mar 09 16:44:50.833530 master-0 kubenswrapper[7604]: I0309 16:44:50.833473 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 09 16:44:50.837934 master-0 kubenswrapper[7604]: I0309 16:44:50.837871 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-var-lock\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.837934 master-0 kubenswrapper[7604]: I0309 16:44:50.837930 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f9ae99-b357-40c6-923e-78f24eaa5517-kube-api-access\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.838060 master-0 kubenswrapper[7604]: I0309 16:44:50.838029 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.939580 master-0 kubenswrapper[7604]: I0309 16:44:50.939470 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-var-lock\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.939580 master-0 kubenswrapper[7604]: I0309 16:44:50.939557 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f9ae99-b357-40c6-923e-78f24eaa5517-kube-api-access\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.939992 master-0 kubenswrapper[7604]: I0309 16:44:50.939619 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.939992 master-0 kubenswrapper[7604]: I0309 16:44:50.939663 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-var-lock\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.939992 master-0 kubenswrapper[7604]: I0309 16:44:50.939821 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:50.956554 master-0 kubenswrapper[7604]: I0309 16:44:50.956483 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f9ae99-b357-40c6-923e-78f24eaa5517-kube-api-access\") pod \"installer-3-master-0\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:51.133034 master-0 kubenswrapper[7604]: I0309 16:44:51.132907 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 09 16:44:51.535306 master-0 kubenswrapper[7604]: I0309 16:44:51.535243 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 09 16:44:51.728082 master-0 kubenswrapper[7604]: I0309 16:44:51.727987 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 09 16:44:51.728082 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld
Mar 09 16:44:51.728082 master-0 kubenswrapper[7604]: [+]process-running ok
Mar 09 16:44:51.728082 master-0 kubenswrapper[7604]: healthz check failed
Mar 09 16:44:51.734876 master-0 kubenswrapper[7604]: I0309 16:44:51.728120 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 09 16:44:52.408795 master-0 kubenswrapper[7604]: I0309 16:44:52.408697 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"98f9ae99-b357-40c6-923e-78f24eaa5517","Type":"ContainerStarted","Data":"a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8"}
Mar 09 16:44:52.408795 master-0 kubenswrapper[7604]: I0309 16:44:52.408764 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"98f9ae99-b357-40c6-923e-78f24eaa5517","Type":"ContainerStarted","Data":"d0f00cfe21532d777608aba1f59f3cddb70cb4d69b80290f9389ff2c5f79fc33"} Mar 09 16:44:52.431722 master-0 kubenswrapper[7604]: I0309 16:44:52.431616 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.431593261 podStartE2EDuration="2.431593261s" podCreationTimestamp="2026-03-09 16:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:44:52.429957928 +0000 UTC m=+1149.483927381" watchObservedRunningTime="2026-03-09 16:44:52.431593261 +0000 UTC m=+1149.485562684" Mar 09 16:44:52.728575 master-0 kubenswrapper[7604]: I0309 16:44:52.728509 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:52.728575 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:52.728575 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:52.728575 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:52.729137 master-0 kubenswrapper[7604]: I0309 16:44:52.728601 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:53.727914 master-0 kubenswrapper[7604]: I0309 16:44:53.727817 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 09 16:44:53.727914 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:53.727914 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:53.727914 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:53.728256 master-0 kubenswrapper[7604]: I0309 16:44:53.727930 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:54.728189 master-0 kubenswrapper[7604]: I0309 16:44:54.728098 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:54.728189 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:54.728189 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:54.728189 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:54.728189 master-0 kubenswrapper[7604]: I0309 16:44:54.728189 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:55.729086 master-0 kubenswrapper[7604]: I0309 16:44:55.728940 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:55.729086 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:55.729086 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:55.729086 master-0 
kubenswrapper[7604]: healthz check failed Mar 09 16:44:55.729964 master-0 kubenswrapper[7604]: I0309 16:44:55.729073 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:56.728215 master-0 kubenswrapper[7604]: I0309 16:44:56.728148 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:56.728215 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:56.728215 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:56.728215 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:56.728817 master-0 kubenswrapper[7604]: I0309 16:44:56.728774 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:57.727515 master-0 kubenswrapper[7604]: I0309 16:44:57.727453 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:57.727515 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:57.727515 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:57.727515 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:57.728047 master-0 kubenswrapper[7604]: I0309 16:44:57.727536 7604 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:58.728466 master-0 kubenswrapper[7604]: I0309 16:44:58.728374 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:58.728466 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:58.728466 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:58.728466 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:58.729166 master-0 kubenswrapper[7604]: I0309 16:44:58.728484 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:44:59.729269 master-0 kubenswrapper[7604]: I0309 16:44:59.729187 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:44:59.729269 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:44:59.729269 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:44:59.729269 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:44:59.729269 master-0 kubenswrapper[7604]: I0309 16:44:59.729271 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:00.728179 
master-0 kubenswrapper[7604]: I0309 16:45:00.728111 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:00.728179 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:00.728179 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:00.728179 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:00.729000 master-0 kubenswrapper[7604]: I0309 16:45:00.728946 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:01.729087 master-0 kubenswrapper[7604]: I0309 16:45:01.728979 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:01.729087 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:01.729087 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:01.729087 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:01.729087 master-0 kubenswrapper[7604]: I0309 16:45:01.729072 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:02.728916 master-0 kubenswrapper[7604]: I0309 16:45:02.728824 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:02.728916 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:02.728916 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:02.728916 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:02.729829 master-0 kubenswrapper[7604]: I0309 16:45:02.728936 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:03.218710 master-0 kubenswrapper[7604]: I0309 16:45:03.218630 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-658vm"] Mar 09 16:45:03.219881 master-0 kubenswrapper[7604]: I0309 16:45:03.219849 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.221817 master-0 kubenswrapper[7604]: I0309 16:45:03.221761 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 09 16:45:03.222272 master-0 kubenswrapper[7604]: I0309 16:45:03.222219 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 09 16:45:03.222511 master-0 kubenswrapper[7604]: I0309 16:45:03.222482 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7zp7c" Mar 09 16:45:03.225327 master-0 kubenswrapper[7604]: I0309 16:45:03.225279 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 09 16:45:03.225486 master-0 kubenswrapper[7604]: I0309 16:45:03.225288 7604 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"telemeter-client" Mar 09 16:45:03.225486 master-0 kubenswrapper[7604]: I0309 16:45:03.225434 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 09 16:45:03.241341 master-0 kubenswrapper[7604]: I0309 16:45:03.241279 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 09 16:45:03.248709 master-0 kubenswrapper[7604]: I0309 16:45:03.248640 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-658vm"] Mar 09 16:45:03.308903 master-0 kubenswrapper[7604]: I0309 16:45:03.308835 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309132 master-0 kubenswrapper[7604]: I0309 16:45:03.308931 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309132 master-0 kubenswrapper[7604]: I0309 16:45:03.308972 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " 
pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309132 master-0 kubenswrapper[7604]: I0309 16:45:03.309002 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309132 master-0 kubenswrapper[7604]: I0309 16:45:03.309024 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309132 master-0 kubenswrapper[7604]: I0309 16:45:03.309050 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309132 master-0 kubenswrapper[7604]: I0309 16:45:03.309076 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkfn\" (UniqueName: \"kubernetes.io/projected/79a8ea87-c29a-4248-927f-6f1acfc494d7-kube-api-access-rvkfn\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.309322 master-0 kubenswrapper[7604]: I0309 16:45:03.309137 
7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410164 master-0 kubenswrapper[7604]: I0309 16:45:03.410086 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410164 master-0 kubenswrapper[7604]: I0309 16:45:03.410169 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410473 master-0 kubenswrapper[7604]: I0309 16:45:03.410209 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410473 master-0 kubenswrapper[7604]: I0309 16:45:03.410231 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod 
\"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410473 master-0 kubenswrapper[7604]: I0309 16:45:03.410265 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410473 master-0 kubenswrapper[7604]: I0309 16:45:03.410289 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkfn\" (UniqueName: \"kubernetes.io/projected/79a8ea87-c29a-4248-927f-6f1acfc494d7-kube-api-access-rvkfn\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410473 master-0 kubenswrapper[7604]: I0309 16:45:03.410314 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.410473 master-0 kubenswrapper[7604]: I0309 16:45:03.410344 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.411560 master-0 kubenswrapper[7604]: I0309 16:45:03.411539 7604 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.413467 master-0 kubenswrapper[7604]: I0309 16:45:03.413415 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.413674 master-0 kubenswrapper[7604]: I0309 16:45:03.413627 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.414566 master-0 kubenswrapper[7604]: I0309 16:45:03.414538 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.414774 master-0 kubenswrapper[7604]: I0309 16:45:03.414753 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: 
\"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.414919 master-0 kubenswrapper[7604]: I0309 16:45:03.414871 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.417447 master-0 kubenswrapper[7604]: I0309 16:45:03.415294 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.435344 master-0 kubenswrapper[7604]: I0309 16:45:03.435299 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkfn\" (UniqueName: \"kubernetes.io/projected/79a8ea87-c29a-4248-927f-6f1acfc494d7-kube-api-access-rvkfn\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.537406 master-0 kubenswrapper[7604]: I0309 16:45:03.537273 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:45:03.728739 master-0 kubenswrapper[7604]: I0309 16:45:03.728670 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:03.728739 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:03.728739 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:03.728739 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:03.729063 master-0 kubenswrapper[7604]: I0309 16:45:03.728751 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:03.975285 master-0 kubenswrapper[7604]: I0309 16:45:03.975222 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d4f6dc665-658vm"] Mar 09 16:45:03.979311 master-0 kubenswrapper[7604]: W0309 16:45:03.979256 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a8ea87_c29a_4248_927f_6f1acfc494d7.slice/crio-8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae WatchSource:0}: Error finding container 8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae: Status 404 returned error can't find the container with id 8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae Mar 09 16:45:04.490265 master-0 kubenswrapper[7604]: I0309 16:45:04.490081 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" 
event={"ID":"79a8ea87-c29a-4248-927f-6f1acfc494d7","Type":"ContainerStarted","Data":"3177a8525228159777047e47ec17d0aad07266e1416aad8089ee1bfae5a31c96"} Mar 09 16:45:04.490265 master-0 kubenswrapper[7604]: I0309 16:45:04.490137 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" event={"ID":"79a8ea87-c29a-4248-927f-6f1acfc494d7","Type":"ContainerStarted","Data":"69315581481f76e7cb8bdfdc5bbccf212e3d336a3bb5df93c6b7bcaa628ee4a5"} Mar 09 16:45:04.490265 master-0 kubenswrapper[7604]: I0309 16:45:04.490152 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" event={"ID":"79a8ea87-c29a-4248-927f-6f1acfc494d7","Type":"ContainerStarted","Data":"8d71b482955e5ec23074b8de06899376ac5cf255c902311d9a2dcd62762d8bc5"} Mar 09 16:45:04.490265 master-0 kubenswrapper[7604]: I0309 16:45:04.490164 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" event={"ID":"79a8ea87-c29a-4248-927f-6f1acfc494d7","Type":"ContainerStarted","Data":"8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae"} Mar 09 16:45:04.515706 master-0 kubenswrapper[7604]: I0309 16:45:04.515609 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" podStartSLOduration=1.515588862 podStartE2EDuration="1.515588862s" podCreationTimestamp="2026-03-09 16:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:45:04.512695475 +0000 UTC m=+1161.566664908" watchObservedRunningTime="2026-03-09 16:45:04.515588862 +0000 UTC m=+1161.569558285" Mar 09 16:45:04.728694 master-0 kubenswrapper[7604]: I0309 16:45:04.728650 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:04.728694 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:04.728694 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:04.728694 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:04.729086 master-0 kubenswrapper[7604]: I0309 16:45:04.729062 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:05.729094 master-0 kubenswrapper[7604]: I0309 16:45:05.729016 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:05.729094 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:05.729094 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:05.729094 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:05.729094 master-0 kubenswrapper[7604]: I0309 16:45:05.729088 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:06.608186 master-0 kubenswrapper[7604]: I0309 16:45:06.608094 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 09 16:45:06.608623 master-0 kubenswrapper[7604]: I0309 16:45:06.608412 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="98f9ae99-b357-40c6-923e-78f24eaa5517" 
containerName="installer" containerID="cri-o://a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8" gracePeriod=30 Mar 09 16:45:06.727781 master-0 kubenswrapper[7604]: I0309 16:45:06.727695 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:06.727781 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:06.727781 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:06.727781 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:06.727781 master-0 kubenswrapper[7604]: I0309 16:45:06.727765 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:07.727477 master-0 kubenswrapper[7604]: I0309 16:45:07.727329 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:07.727477 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:07.727477 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:07.727477 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:07.727477 master-0 kubenswrapper[7604]: I0309 16:45:07.727417 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:08.728839 master-0 kubenswrapper[7604]: I0309 16:45:08.728764 7604 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:08.728839 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:08.728839 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:08.728839 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:08.729600 master-0 kubenswrapper[7604]: I0309 16:45:08.728868 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:09.728695 master-0 kubenswrapper[7604]: I0309 16:45:09.728617 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:09.728695 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:09.728695 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:09.728695 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:09.729516 master-0 kubenswrapper[7604]: I0309 16:45:09.728704 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:09.812001 master-0 kubenswrapper[7604]: I0309 16:45:09.811464 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 09 16:45:09.812743 master-0 kubenswrapper[7604]: I0309 16:45:09.812714 7604 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:09.830477 master-0 kubenswrapper[7604]: I0309 16:45:09.829782 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 09 16:45:10.006615 master-0 kubenswrapper[7604]: I0309 16:45:10.006448 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.006615 master-0 kubenswrapper[7604]: I0309 16:45:10.006565 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.006615 master-0 kubenswrapper[7604]: I0309 16:45:10.006600 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.107666 master-0 kubenswrapper[7604]: I0309 16:45:10.107593 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.107666 master-0 kubenswrapper[7604]: I0309 16:45:10.107666 7604 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.107976 master-0 kubenswrapper[7604]: I0309 16:45:10.107838 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.107976 master-0 kubenswrapper[7604]: I0309 16:45:10.107889 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.107976 master-0 kubenswrapper[7604]: I0309 16:45:10.107945 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.131032 master-0 kubenswrapper[7604]: I0309 16:45:10.130946 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.139872 master-0 kubenswrapper[7604]: I0309 16:45:10.139812 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:45:10.528649 master-0 kubenswrapper[7604]: I0309 16:45:10.528612 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 09 16:45:10.727890 master-0 kubenswrapper[7604]: I0309 16:45:10.727834 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:10.727890 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:10.727890 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:10.727890 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:10.728368 master-0 kubenswrapper[7604]: I0309 16:45:10.727900 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:11.537646 master-0 kubenswrapper[7604]: I0309 16:45:11.537554 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"696fcca2-df1a-491d-956d-1cfda1ee5e70","Type":"ContainerStarted","Data":"f0361f83355d67a2e316e3ff34c657a94b865183e5a166fa44ab20e7b17b6c77"} Mar 09 16:45:11.538157 master-0 kubenswrapper[7604]: I0309 16:45:11.537657 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"696fcca2-df1a-491d-956d-1cfda1ee5e70","Type":"ContainerStarted","Data":"48ea3b1c1a43df7f7909a26935d767da157bf2e1b5a1c65e482d9227e70712b8"} Mar 09 16:45:11.557553 master-0 kubenswrapper[7604]: I0309 16:45:11.557453 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.557417825 podStartE2EDuration="2.557417825s" podCreationTimestamp="2026-03-09 16:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:45:11.551354422 +0000 UTC m=+1168.605323845" watchObservedRunningTime="2026-03-09 16:45:11.557417825 +0000 UTC m=+1168.611387248" Mar 09 16:45:11.729391 master-0 kubenswrapper[7604]: I0309 16:45:11.729282 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:11.729391 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:11.729391 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:11.729391 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:11.729391 master-0 kubenswrapper[7604]: I0309 16:45:11.729347 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:12.728940 master-0 kubenswrapper[7604]: I0309 16:45:12.728853 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:12.728940 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:12.728940 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:12.728940 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:12.730162 master-0 kubenswrapper[7604]: I0309 16:45:12.728960 7604 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:13.556394 master-0 kubenswrapper[7604]: I0309 16:45:13.556332 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/5.log" Mar 09 16:45:13.557473 master-0 kubenswrapper[7604]: I0309 16:45:13.557377 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/4.log" Mar 09 16:45:13.558068 master-0 kubenswrapper[7604]: I0309 16:45:13.558000 7604 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" exitCode=1 Mar 09 16:45:13.558214 master-0 kubenswrapper[7604]: I0309 16:45:13.558073 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerDied","Data":"3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023"} Mar 09 16:45:13.558214 master-0 kubenswrapper[7604]: I0309 16:45:13.558171 7604 scope.go:117] "RemoveContainer" containerID="656468d44b2ac64c93704b39a4b851c38553e111f5bafb24330029728182fba0" Mar 09 16:45:13.559192 master-0 kubenswrapper[7604]: I0309 16:45:13.559128 7604 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:45:13.559700 master-0 kubenswrapper[7604]: E0309 16:45:13.559636 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:45:13.727763 master-0 kubenswrapper[7604]: I0309 16:45:13.727703 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:13.727763 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:13.727763 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:13.727763 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:13.728112 master-0 kubenswrapper[7604]: I0309 16:45:13.727774 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:14.565895 master-0 kubenswrapper[7604]: I0309 16:45:14.565828 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/5.log" Mar 09 16:45:14.727191 master-0 kubenswrapper[7604]: I0309 16:45:14.727118 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:14.727191 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:14.727191 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:14.727191 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:14.727582 master-0 
kubenswrapper[7604]: I0309 16:45:14.727189 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:15.728455 master-0 kubenswrapper[7604]: I0309 16:45:15.728209 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:15.728455 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:15.728455 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:15.728455 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:15.728455 master-0 kubenswrapper[7604]: I0309 16:45:15.728289 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:16.728351 master-0 kubenswrapper[7604]: I0309 16:45:16.728015 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:16.728351 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:16.728351 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:16.728351 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:16.728351 master-0 kubenswrapper[7604]: I0309 16:45:16.728104 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:17.729249 master-0 kubenswrapper[7604]: I0309 16:45:17.729171 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:17.729249 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:17.729249 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:17.729249 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:17.729985 master-0 kubenswrapper[7604]: I0309 16:45:17.729254 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:18.727705 master-0 kubenswrapper[7604]: I0309 16:45:18.727636 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:18.727705 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:18.727705 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:18.727705 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:18.727705 master-0 kubenswrapper[7604]: I0309 16:45:18.727709 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:19.727689 master-0 kubenswrapper[7604]: I0309 16:45:19.727600 7604 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:45:19.727689 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:45:19.727689 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:45:19.727689 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:45:19.727689 master-0 kubenswrapper[7604]: I0309 16:45:19.727675 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:45:19.728306 master-0 kubenswrapper[7604]: I0309 16:45:19.727726 7604 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:45:19.728347 master-0 kubenswrapper[7604]: I0309 16:45:19.728316 7604 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"34d69e01c0df1a8808ea1e61ee678a2f4eb359f9a66a8c80ee688b834fc7aa8b"} pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" containerMessage="Container router failed startup probe, will be restarted" Mar 09 16:45:19.728386 master-0 kubenswrapper[7604]: I0309 16:45:19.728346 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" containerID="cri-o://34d69e01c0df1a8808ea1e61ee678a2f4eb359f9a66a8c80ee688b834fc7aa8b" gracePeriod=3600 Mar 09 16:45:22.906465 master-0 kubenswrapper[7604]: I0309 16:45:22.906249 7604 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_98f9ae99-b357-40c6-923e-78f24eaa5517/installer/0.log" Mar 09 16:45:22.906465 master-0 kubenswrapper[7604]: I0309 16:45:22.906311 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 09 16:45:23.076415 master-0 kubenswrapper[7604]: I0309 16:45:23.076311 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f9ae99-b357-40c6-923e-78f24eaa5517-kube-api-access\") pod \"98f9ae99-b357-40c6-923e-78f24eaa5517\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " Mar 09 16:45:23.076654 master-0 kubenswrapper[7604]: I0309 16:45:23.076464 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-kubelet-dir\") pod \"98f9ae99-b357-40c6-923e-78f24eaa5517\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " Mar 09 16:45:23.076654 master-0 kubenswrapper[7604]: I0309 16:45:23.076557 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-var-lock\") pod \"98f9ae99-b357-40c6-923e-78f24eaa5517\" (UID: \"98f9ae99-b357-40c6-923e-78f24eaa5517\") " Mar 09 16:45:23.076654 master-0 kubenswrapper[7604]: I0309 16:45:23.076584 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98f9ae99-b357-40c6-923e-78f24eaa5517" (UID: "98f9ae99-b357-40c6-923e-78f24eaa5517"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:45:23.076793 master-0 kubenswrapper[7604]: I0309 16:45:23.076714 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-var-lock" (OuterVolumeSpecName: "var-lock") pod "98f9ae99-b357-40c6-923e-78f24eaa5517" (UID: "98f9ae99-b357-40c6-923e-78f24eaa5517"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:45:23.076907 master-0 kubenswrapper[7604]: I0309 16:45:23.076866 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:45:23.076907 master-0 kubenswrapper[7604]: I0309 16:45:23.076897 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98f9ae99-b357-40c6-923e-78f24eaa5517-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:45:23.079328 master-0 kubenswrapper[7604]: I0309 16:45:23.079285 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f9ae99-b357-40c6-923e-78f24eaa5517-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98f9ae99-b357-40c6-923e-78f24eaa5517" (UID: "98f9ae99-b357-40c6-923e-78f24eaa5517"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:45:23.178604 master-0 kubenswrapper[7604]: I0309 16:45:23.178513 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f9ae99-b357-40c6-923e-78f24eaa5517-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:45:23.632824 master-0 kubenswrapper[7604]: I0309 16:45:23.632776 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_98f9ae99-b357-40c6-923e-78f24eaa5517/installer/0.log" Mar 09 16:45:23.633064 master-0 kubenswrapper[7604]: I0309 16:45:23.632834 7604 generic.go:334] "Generic (PLEG): container finished" podID="98f9ae99-b357-40c6-923e-78f24eaa5517" containerID="a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8" exitCode=1 Mar 09 16:45:23.633064 master-0 kubenswrapper[7604]: I0309 16:45:23.632867 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"98f9ae99-b357-40c6-923e-78f24eaa5517","Type":"ContainerDied","Data":"a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8"} Mar 09 16:45:23.633064 master-0 kubenswrapper[7604]: I0309 16:45:23.632904 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"98f9ae99-b357-40c6-923e-78f24eaa5517","Type":"ContainerDied","Data":"d0f00cfe21532d777608aba1f59f3cddb70cb4d69b80290f9389ff2c5f79fc33"} Mar 09 16:45:23.633064 master-0 kubenswrapper[7604]: I0309 16:45:23.632924 7604 scope.go:117] "RemoveContainer" containerID="a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8" Mar 09 16:45:23.633064 master-0 kubenswrapper[7604]: I0309 16:45:23.632950 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 09 16:45:23.648050 master-0 kubenswrapper[7604]: I0309 16:45:23.648008 7604 scope.go:117] "RemoveContainer" containerID="a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8" Mar 09 16:45:23.648605 master-0 kubenswrapper[7604]: E0309 16:45:23.648569 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8\": container with ID starting with a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8 not found: ID does not exist" containerID="a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8" Mar 09 16:45:23.648685 master-0 kubenswrapper[7604]: I0309 16:45:23.648608 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8"} err="failed to get container status \"a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8\": rpc error: code = NotFound desc = could not find container \"a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8\": container with ID starting with a16b77592a76be2aad315049d08fac692207ad057a47d9aa790362252d949cf8 not found: ID does not exist" Mar 09 16:45:23.655184 master-0 kubenswrapper[7604]: I0309 16:45:23.655128 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 09 16:45:23.662841 master-0 kubenswrapper[7604]: I0309 16:45:23.662775 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 09 16:45:24.110823 master-0 kubenswrapper[7604]: I0309 16:45:24.110776 7604 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:45:24.111542 master-0 kubenswrapper[7604]: E0309 16:45:24.111048 7604 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:45:24.199028 master-0 kubenswrapper[7604]: I0309 16:45:24.198940 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 09 16:45:24.199280 master-0 kubenswrapper[7604]: E0309 16:45:24.199258 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f9ae99-b357-40c6-923e-78f24eaa5517" containerName="installer" Mar 09 16:45:24.199280 master-0 kubenswrapper[7604]: I0309 16:45:24.199277 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f9ae99-b357-40c6-923e-78f24eaa5517" containerName="installer" Mar 09 16:45:24.199456 master-0 kubenswrapper[7604]: I0309 16:45:24.199403 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f9ae99-b357-40c6-923e-78f24eaa5517" containerName="installer" Mar 09 16:45:24.200184 master-0 kubenswrapper[7604]: I0309 16:45:24.199949 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.202356 master-0 kubenswrapper[7604]: I0309 16:45:24.202319 7604 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cshl6" Mar 09 16:45:24.202726 master-0 kubenswrapper[7604]: I0309 16:45:24.202686 7604 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 09 16:45:24.211250 master-0 kubenswrapper[7604]: I0309 16:45:24.211208 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 09 16:45:24.296236 master-0 kubenswrapper[7604]: I0309 16:45:24.296171 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.296575 master-0 kubenswrapper[7604]: I0309 16:45:24.296247 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.296575 master-0 kubenswrapper[7604]: I0309 16:45:24.296306 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.397839 master-0 
kubenswrapper[7604]: I0309 16:45:24.397241 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.397839 master-0 kubenswrapper[7604]: I0309 16:45:24.397353 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.397839 master-0 kubenswrapper[7604]: I0309 16:45:24.397399 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.397839 master-0 kubenswrapper[7604]: I0309 16:45:24.397524 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.397839 master-0 kubenswrapper[7604]: I0309 16:45:24.397689 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" 
Mar 09 16:45:24.414975 master-0 kubenswrapper[7604]: I0309 16:45:24.414898 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.520394 master-0 kubenswrapper[7604]: I0309 16:45:24.519968 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:24.927473 master-0 kubenswrapper[7604]: I0309 16:45:24.927353 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 09 16:45:25.120068 master-0 kubenswrapper[7604]: I0309 16:45:25.119755 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f9ae99-b357-40c6-923e-78f24eaa5517" path="/var/lib/kubelet/pods/98f9ae99-b357-40c6-923e-78f24eaa5517/volumes" Mar 09 16:45:25.648741 master-0 kubenswrapper[7604]: I0309 16:45:25.648691 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4","Type":"ContainerStarted","Data":"7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1"} Mar 09 16:45:25.648741 master-0 kubenswrapper[7604]: I0309 16:45:25.648744 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4","Type":"ContainerStarted","Data":"780ab87267d09a817c8af70d196d52705930bc50d893178b79a2f3daaac2986b"} Mar 09 16:45:25.665079 master-0 kubenswrapper[7604]: I0309 16:45:25.664984 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" podStartSLOduration=1.664954343 
podStartE2EDuration="1.664954343s" podCreationTimestamp="2026-03-09 16:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:45:25.663835884 +0000 UTC m=+1182.717805327" watchObservedRunningTime="2026-03-09 16:45:25.664954343 +0000 UTC m=+1182.718923766" Mar 09 16:45:32.798333 master-0 kubenswrapper[7604]: I0309 16:45:32.798190 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 09 16:45:32.798982 master-0 kubenswrapper[7604]: I0309 16:45:32.798440 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" podUID="f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" containerName="installer" containerID="cri-o://7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1" gracePeriod=30 Mar 09 16:45:34.798060 master-0 kubenswrapper[7604]: I0309 16:45:34.797962 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 09 16:45:34.799024 master-0 kubenswrapper[7604]: I0309 16:45:34.798987 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:34.811389 master-0 kubenswrapper[7604]: I0309 16:45:34.811331 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 09 16:45:34.941705 master-0 kubenswrapper[7604]: I0309 16:45:34.941643 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:34.941705 master-0 kubenswrapper[7604]: I0309 16:45:34.941700 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:34.941994 master-0 kubenswrapper[7604]: I0309 16:45:34.941814 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.042788 master-0 kubenswrapper[7604]: I0309 16:45:35.042728 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.042788 master-0 kubenswrapper[7604]: I0309 16:45:35.042793 7604 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.043216 master-0 kubenswrapper[7604]: I0309 16:45:35.042813 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.043216 master-0 kubenswrapper[7604]: I0309 16:45:35.042884 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.043216 master-0 kubenswrapper[7604]: I0309 16:45:35.042965 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.059133 master-0 kubenswrapper[7604]: I0309 16:45:35.058986 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.118596 master-0 kubenswrapper[7604]: I0309 16:45:35.118544 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:45:35.525811 master-0 kubenswrapper[7604]: I0309 16:45:35.525767 7604 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 09 16:45:35.710630 master-0 kubenswrapper[7604]: I0309 16:45:35.710570 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e","Type":"ContainerStarted","Data":"ca6c68bebab4667be94ed8d4950c8443a1dc101549e30dea2fc49d8db92f1da8"} Mar 09 16:45:36.112288 master-0 kubenswrapper[7604]: I0309 16:45:36.112114 7604 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:45:36.113604 master-0 kubenswrapper[7604]: E0309 16:45:36.113256 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:45:36.720530 master-0 kubenswrapper[7604]: I0309 16:45:36.720380 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e","Type":"ContainerStarted","Data":"4bd1e152391019fc30761bea1a52c716092ae04ae17eaec109956953b77c5f4d"} Mar 09 16:45:36.740982 master-0 kubenswrapper[7604]: I0309 16:45:36.740855 7604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=2.740819732 podStartE2EDuration="2.740819732s" podCreationTimestamp="2026-03-09 16:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:45:36.736941208 +0000 UTC m=+1193.790910631" watchObservedRunningTime="2026-03-09 16:45:36.740819732 +0000 UTC m=+1193.794789155" Mar 09 16:45:47.110468 master-0 kubenswrapper[7604]: I0309 16:45:47.110400 7604 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:45:47.111182 master-0 kubenswrapper[7604]: E0309 16:45:47.110703 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:45:56.244859 master-0 kubenswrapper[7604]: I0309 16:45:56.244791 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-retry-1-master-0_f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4/installer/0.log" Mar 09 16:45:56.245476 master-0 kubenswrapper[7604]: I0309 16:45:56.244888 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:56.356970 master-0 kubenswrapper[7604]: I0309 16:45:56.356868 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kubelet-dir\") pod \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " Mar 09 16:45:56.357188 master-0 kubenswrapper[7604]: I0309 16:45:56.357014 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" (UID: "f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:45:56.357188 master-0 kubenswrapper[7604]: I0309 16:45:56.357061 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-var-lock\") pod \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " Mar 09 16:45:56.357188 master-0 kubenswrapper[7604]: I0309 16:45:56.357156 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-var-lock" (OuterVolumeSpecName: "var-lock") pod "f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" (UID: "f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:45:56.357319 master-0 kubenswrapper[7604]: I0309 16:45:56.357177 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kube-api-access\") pod \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\" (UID: \"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4\") " Mar 09 16:45:56.357756 master-0 kubenswrapper[7604]: I0309 16:45:56.357721 7604 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:45:56.357756 master-0 kubenswrapper[7604]: I0309 16:45:56.357753 7604 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:45:56.360455 master-0 kubenswrapper[7604]: I0309 16:45:56.360398 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" (UID: "f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:45:56.458874 master-0 kubenswrapper[7604]: I0309 16:45:56.458764 7604 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:45:56.873015 master-0 kubenswrapper[7604]: I0309 16:45:56.872908 7604 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-retry-1-master-0_f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4/installer/0.log" Mar 09 16:45:56.873258 master-0 kubenswrapper[7604]: I0309 16:45:56.873237 7604 generic.go:334] "Generic (PLEG): container finished" podID="f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" containerID="7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1" exitCode=1 Mar 09 16:45:56.873349 master-0 kubenswrapper[7604]: I0309 16:45:56.873331 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4","Type":"ContainerDied","Data":"7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1"} Mar 09 16:45:56.873451 master-0 kubenswrapper[7604]: I0309 16:45:56.873416 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4","Type":"ContainerDied","Data":"780ab87267d09a817c8af70d196d52705930bc50d893178b79a2f3daaac2986b"} Mar 09 16:45:56.873553 master-0 kubenswrapper[7604]: I0309 16:45:56.873499 7604 scope.go:117] "RemoveContainer" containerID="7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1" Mar 09 16:45:56.873710 master-0 kubenswrapper[7604]: I0309 16:45:56.873350 7604 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 09 16:45:56.892169 master-0 kubenswrapper[7604]: I0309 16:45:56.892114 7604 scope.go:117] "RemoveContainer" containerID="7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1" Mar 09 16:45:56.893001 master-0 kubenswrapper[7604]: E0309 16:45:56.892928 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1\": container with ID starting with 7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1 not found: ID does not exist" containerID="7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1" Mar 09 16:45:56.893084 master-0 kubenswrapper[7604]: I0309 16:45:56.893020 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1"} err="failed to get container status \"7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1\": rpc error: code = NotFound desc = could not find container \"7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1\": container with ID starting with 7f9bfa20a18f3a58083e45d9b65439151795111b4e195d42426c6a2a49679cc1 not found: ID does not exist" Mar 09 16:45:56.916917 master-0 kubenswrapper[7604]: I0309 16:45:56.916835 7604 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 09 16:45:56.924017 master-0 kubenswrapper[7604]: I0309 16:45:56.923936 7604 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 09 16:45:57.120853 master-0 kubenswrapper[7604]: I0309 16:45:57.120755 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" 
path="/var/lib/kubelet/pods/f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4/volumes" Mar 09 16:45:58.111022 master-0 kubenswrapper[7604]: I0309 16:45:58.110922 7604 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:45:58.111770 master-0 kubenswrapper[7604]: E0309 16:45:58.111345 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:46:05.948965 master-0 kubenswrapper[7604]: I0309 16:46:05.948902 7604 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="34d69e01c0df1a8808ea1e61ee678a2f4eb359f9a66a8c80ee688b834fc7aa8b" exitCode=0 Mar 09 16:46:05.948965 master-0 kubenswrapper[7604]: I0309 16:46:05.948964 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerDied","Data":"34d69e01c0df1a8808ea1e61ee678a2f4eb359f9a66a8c80ee688b834fc7aa8b"} Mar 09 16:46:05.956479 master-0 kubenswrapper[7604]: I0309 16:46:05.949012 7604 scope.go:117] "RemoveContainer" containerID="0a6dcd96dc0badcacb59d76f3cf7625d66b40a5e5d0f154a56f4d766f6cd06e0" Mar 09 16:46:06.958624 master-0 kubenswrapper[7604]: I0309 16:46:06.958528 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" event={"ID":"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56","Type":"ContainerStarted","Data":"8ee87bbf602f5d4393c700e411a026d7c27ca2d54ea3700281a5559d142c1667"} Mar 09 16:46:07.725919 master-0 kubenswrapper[7604]: I0309 16:46:07.725827 7604 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:07.737402 master-0 kubenswrapper[7604]: I0309 16:46:07.737257 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:07.737402 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:46:07.737402 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:07.737402 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:07.737752 master-0 kubenswrapper[7604]: I0309 16:46:07.737611 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:08.560718 master-0 kubenswrapper[7604]: I0309 16:46:08.560614 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 09 16:46:08.561478 master-0 kubenswrapper[7604]: E0309 16:46:08.561086 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" containerName="installer" Mar 09 16:46:08.561478 master-0 kubenswrapper[7604]: I0309 16:46:08.561106 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" containerName="installer" Mar 09 16:46:08.561478 master-0 kubenswrapper[7604]: I0309 16:46:08.561282 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="f20d454f-4fa1-4ab5-8ea0-cdb0265ec1e4" containerName="installer" Mar 09 16:46:08.561985 master-0 kubenswrapper[7604]: I0309 16:46:08.561950 7604 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 09 16:46:08.562160 master-0 kubenswrapper[7604]: I0309 16:46:08.562097 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.562512 master-0 kubenswrapper[7604]: I0309 16:46:08.562406 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c" gracePeriod=15 Mar 09 16:46:08.562740 master-0 kubenswrapper[7604]: I0309 16:46:08.562639 7604 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e" gracePeriod=15 Mar 09 16:46:08.564993 master-0 kubenswrapper[7604]: I0309 16:46:08.564951 7604 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 09 16:46:08.565301 master-0 kubenswrapper[7604]: E0309 16:46:08.565268 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 09 16:46:08.565301 master-0 kubenswrapper[7604]: I0309 16:46:08.565295 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 09 16:46:08.565487 master-0 kubenswrapper[7604]: E0309 16:46:08.565387 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 09 16:46:08.565487 master-0 kubenswrapper[7604]: I0309 16:46:08.565407 7604 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 09 16:46:08.565487 master-0 kubenswrapper[7604]: E0309 16:46:08.565455 7604 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 09 16:46:08.565487 master-0 kubenswrapper[7604]: I0309 16:46:08.565464 7604 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 09 16:46:08.565837 master-0 kubenswrapper[7604]: I0309 16:46:08.565666 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 09 16:46:08.565837 master-0 kubenswrapper[7604]: I0309 16:46:08.565689 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 09 16:46:08.565837 master-0 kubenswrapper[7604]: I0309 16:46:08.565707 7604 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 09 16:46:08.568087 master-0 kubenswrapper[7604]: I0309 16:46:08.567760 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.634169 master-0 kubenswrapper[7604]: I0309 16:46:08.634044 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 09 16:46:08.637547 master-0 kubenswrapper[7604]: I0309 16:46:08.637458 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.637630 master-0 kubenswrapper[7604]: I0309 16:46:08.637567 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.637703 master-0 kubenswrapper[7604]: I0309 16:46:08.637681 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.637742 master-0 kubenswrapper[7604]: I0309 16:46:08.637717 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.637778 master-0 kubenswrapper[7604]: I0309 16:46:08.637768 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.637891 master-0 kubenswrapper[7604]: I0309 16:46:08.637863 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.638110 master-0 kubenswrapper[7604]: I0309 16:46:08.638017 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.638110 master-0 kubenswrapper[7604]: I0309 16:46:08.638106 7604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.649403 master-0 kubenswrapper[7604]: I0309 16:46:08.649328 7604 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 09 16:46:08.728411 master-0 
kubenswrapper[7604]: I0309 16:46:08.727313 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:08.734739 master-0 kubenswrapper[7604]: I0309 16:46:08.734477 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:08.734739 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:46:08.734739 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:08.734739 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:08.734739 master-0 kubenswrapper[7604]: I0309 16:46:08.734638 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:08.739846 master-0 kubenswrapper[7604]: I0309 16:46:08.739682 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.739846 master-0 kubenswrapper[7604]: I0309 16:46:08.739779 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.739846 master-0 kubenswrapper[7604]: I0309 16:46:08.739812 7604 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.740123 master-0 kubenswrapper[7604]: I0309 16:46:08.739867 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.740123 master-0 kubenswrapper[7604]: I0309 16:46:08.739902 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.740123 master-0 kubenswrapper[7604]: I0309 16:46:08.739953 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.740123 master-0 kubenswrapper[7604]: I0309 16:46:08.739979 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.740123 
master-0 kubenswrapper[7604]: I0309 16:46:08.740019 7604 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.740292 master-0 kubenswrapper[7604]: I0309 16:46:08.740161 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.740292 master-0 kubenswrapper[7604]: I0309 16:46:08.740231 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.740292 master-0 kubenswrapper[7604]: I0309 16:46:08.740268 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.740410 master-0 kubenswrapper[7604]: I0309 16:46:08.740302 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.740410 
master-0 kubenswrapper[7604]: I0309 16:46:08.740335 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.740410 master-0 kubenswrapper[7604]: I0309 16:46:08.740372 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.760457 master-0 kubenswrapper[7604]: I0309 16:46:08.740411 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.760457 master-0 kubenswrapper[7604]: I0309 16:46:08.740584 7604 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.926389 master-0 kubenswrapper[7604]: I0309 16:46:08.926225 7604 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:08.946897 master-0 kubenswrapper[7604]: I0309 16:46:08.946832 7604 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:08.962339 master-0 kubenswrapper[7604]: E0309 16:46:08.962150 7604 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189b3a1bb41e2d50 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:3a18cac8a90d6913a6a0391d805cddc9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:46:08.960826704 +0000 UTC m=+1226.014796127,LastTimestamp:2026-03-09 16:46:08.960826704 +0000 UTC m=+1226.014796127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:46:08.972957 master-0 kubenswrapper[7604]: W0309 16:46:08.972890 7604 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48512e02022680c9d90092634f0fc146.slice/crio-fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a WatchSource:0}: Error finding container fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a: Status 404 returned error can't find the container with id fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a Mar 09 16:46:08.976873 master-0 kubenswrapper[7604]: I0309 16:46:08.976836 7604 generic.go:334] "Generic (PLEG): container finished" 
podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e" exitCode=0 Mar 09 16:46:08.978890 master-0 kubenswrapper[7604]: I0309 16:46:08.978836 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"b43b8b247bcf7dd91e3dade29e3c0373e4989b5f279bccec521a6e0e7ca4f4e0"} Mar 09 16:46:09.002844 master-0 kubenswrapper[7604]: I0309 16:46:09.002779 7604 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body= Mar 09 16:46:09.003072 master-0 kubenswrapper[7604]: I0309 16:46:09.002861 7604 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.387972 master-0 kubenswrapper[7604]: E0309 16:46:09.387880 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.389484 master-0 kubenswrapper[7604]: E0309 16:46:09.389393 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.390282 master-0 kubenswrapper[7604]: E0309 16:46:09.390229 7604 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.391023 master-0 kubenswrapper[7604]: E0309 16:46:09.390994 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.391771 master-0 kubenswrapper[7604]: E0309 16:46:09.391706 7604 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.391771 master-0 kubenswrapper[7604]: I0309 16:46:09.391769 7604 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 09 16:46:09.392337 master-0 kubenswrapper[7604]: E0309 16:46:09.392294 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 09 16:46:09.594200 master-0 kubenswrapper[7604]: E0309 16:46:09.594073 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 09 16:46:09.729941 master-0 kubenswrapper[7604]: I0309 16:46:09.729834 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:09.729941 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:46:09.729941 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:09.729941 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:09.730239 master-0 kubenswrapper[7604]: I0309 16:46:09.729997 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:09.989513 master-0 kubenswrapper[7604]: I0309 16:46:09.989411 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5"} Mar 09 16:46:09.990965 master-0 kubenswrapper[7604]: I0309 16:46:09.990916 7604 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.992017 master-0 kubenswrapper[7604]: I0309 16:46:09.991938 7604 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.992639 master-0 kubenswrapper[7604]: I0309 16:46:09.992584 7604 generic.go:334] "Generic (PLEG): container finished" 
podID="48512e02022680c9d90092634f0fc146" containerID="00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497" exitCode=0 Mar 09 16:46:09.992747 master-0 kubenswrapper[7604]: I0309 16:46:09.992689 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497"} Mar 09 16:46:09.992832 master-0 kubenswrapper[7604]: I0309 16:46:09.992748 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a"} Mar 09 16:46:09.994046 master-0 kubenswrapper[7604]: I0309 16:46:09.993921 7604 status_manager.go:851] "Failed to get status for pod" podUID="3a18cac8a90d6913a6a0391d805cddc9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.995052 master-0 kubenswrapper[7604]: I0309 16:46:09.994996 7604 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:46:09.995052 master-0 kubenswrapper[7604]: E0309 16:46:09.995005 7604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" 
interval="800ms" Mar 09 16:46:10.111283 master-0 kubenswrapper[7604]: I0309 16:46:10.111215 7604 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:46:10.111626 master-0 kubenswrapper[7604]: E0309 16:46:10.111558 7604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-xtmhw_openshift-ingress-operator(f606b775-bf22-4d64-abb4-8e0e24ddb5cd)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" podUID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" Mar 09 16:46:10.729179 master-0 kubenswrapper[7604]: I0309 16:46:10.729116 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:10.729179 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:46:10.729179 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:10.729179 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:10.729179 master-0 kubenswrapper[7604]: I0309 16:46:10.729178 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:11.041086 master-0 kubenswrapper[7604]: I0309 16:46:11.038708 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4"} Mar 09 16:46:11.041086 master-0 kubenswrapper[7604]: I0309 16:46:11.038787 
7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58"} Mar 09 16:46:11.041086 master-0 kubenswrapper[7604]: I0309 16:46:11.038805 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49"} Mar 09 16:46:11.165549 master-0 kubenswrapper[7604]: I0309 16:46:11.165496 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:46:11.305851 master-0 kubenswrapper[7604]: I0309 16:46:11.305668 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 09 16:46:11.306337 master-0 kubenswrapper[7604]: I0309 16:46:11.306269 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:46:11.306468 master-0 kubenswrapper[7604]: I0309 16:46:11.306439 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:46:11.306896 master-0 kubenswrapper[7604]: I0309 16:46:11.305782 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 09 16:46:11.306963 master-0 kubenswrapper[7604]: I0309 16:46:11.306908 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 09 16:46:11.306963 master-0 kubenswrapper[7604]: I0309 16:46:11.306934 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 09 16:46:11.307045 master-0 kubenswrapper[7604]: I0309 16:46:11.306975 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 09 16:46:11.307045 master-0 kubenswrapper[7604]: I0309 16:46:11.307007 7604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " Mar 09 16:46:11.307841 master-0 kubenswrapper[7604]: I0309 16:46:11.307165 7604 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:11.307841 master-0 kubenswrapper[7604]: I0309 16:46:11.307186 7604 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:11.307841 master-0 kubenswrapper[7604]: I0309 16:46:11.307235 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:46:11.307841 master-0 kubenswrapper[7604]: I0309 16:46:11.307269 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:46:11.307841 master-0 kubenswrapper[7604]: I0309 16:46:11.307285 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:46:11.307841 master-0 kubenswrapper[7604]: I0309 16:46:11.307298 7604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:46:11.409742 master-0 kubenswrapper[7604]: I0309 16:46:11.409596 7604 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:11.409742 master-0 kubenswrapper[7604]: I0309 16:46:11.409670 7604 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:11.409742 master-0 kubenswrapper[7604]: I0309 16:46:11.409683 7604 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:11.409742 master-0 kubenswrapper[7604]: I0309 16:46:11.409697 7604 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:11.736654 master-0 kubenswrapper[7604]: I0309 16:46:11.736580 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:11.736654 master-0 kubenswrapper[7604]: 
[-]has-synced failed: reason withheld Mar 09 16:46:11.736654 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:11.736654 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:11.738150 master-0 kubenswrapper[7604]: I0309 16:46:11.738106 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:12.099937 master-0 kubenswrapper[7604]: I0309 16:46:12.099769 7604 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c" exitCode=0 Mar 09 16:46:12.100360 master-0 kubenswrapper[7604]: I0309 16:46:12.100278 7604 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 09 16:46:12.100653 master-0 kubenswrapper[7604]: I0309 16:46:12.100628 7604 scope.go:117] "RemoveContainer" containerID="f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e" Mar 09 16:46:12.121442 master-0 kubenswrapper[7604]: I0309 16:46:12.121110 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9"} Mar 09 16:46:12.121442 master-0 kubenswrapper[7604]: I0309 16:46:12.121178 7604 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137"} Mar 09 16:46:12.123647 master-0 kubenswrapper[7604]: I0309 16:46:12.121857 7604 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:12.135091 master-0 kubenswrapper[7604]: I0309 16:46:12.135038 7604 scope.go:117] "RemoveContainer" containerID="06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c" Mar 09 16:46:12.185518 master-0 kubenswrapper[7604]: I0309 16:46:12.185377 7604 scope.go:117] "RemoveContainer" containerID="41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22" Mar 09 16:46:12.218554 master-0 kubenswrapper[7604]: I0309 16:46:12.218512 7604 scope.go:117] "RemoveContainer" containerID="f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e" Mar 09 16:46:12.219152 master-0 kubenswrapper[7604]: E0309 16:46:12.219119 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e\": container with ID starting with f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e not found: ID does not exist" containerID="f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e" Mar 09 16:46:12.219204 master-0 kubenswrapper[7604]: I0309 16:46:12.219163 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e"} err="failed to get container status \"f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e\": rpc error: code = NotFound desc = could not find container \"f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e\": container with ID starting with f073196274a73fedd50a3deebaa4447623b916147f7202b3fe43a147666bc90e not found: ID does not exist" Mar 09 16:46:12.219204 master-0 kubenswrapper[7604]: I0309 16:46:12.219191 7604 scope.go:117] "RemoveContainer" containerID="06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c" Mar 09 16:46:12.220238 master-0 kubenswrapper[7604]: E0309 16:46:12.220204 7604 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c\": container with ID starting with 06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c not found: ID does not exist" containerID="06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c" Mar 09 16:46:12.220314 master-0 kubenswrapper[7604]: I0309 16:46:12.220237 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c"} err="failed to get container status \"06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c\": rpc error: code = NotFound desc = could not find container \"06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c\": container with ID starting with 06a98184fe555cdb10f3d557227868ce2fd9189fc136dc3b2da58e9a5cb5724c not found: ID does not exist" Mar 09 16:46:12.220314 master-0 kubenswrapper[7604]: I0309 16:46:12.220255 7604 scope.go:117] "RemoveContainer" containerID="41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22" Mar 09 16:46:12.220559 master-0 kubenswrapper[7604]: E0309 16:46:12.220534 7604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22\": container with ID starting with 41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22 not found: ID does not exist" containerID="41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22" Mar 09 16:46:12.220621 master-0 kubenswrapper[7604]: I0309 16:46:12.220561 7604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22"} err="failed to get container status 
\"41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22\": rpc error: code = NotFound desc = could not find container \"41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22\": container with ID starting with 41941c51c09408cf0581a66f482ed31f66892e9b3261a932bc8849c476354b22 not found: ID does not exist" Mar 09 16:46:12.729161 master-0 kubenswrapper[7604]: I0309 16:46:12.729022 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:12.729161 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:46:12.729161 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:12.729161 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:12.729161 master-0 kubenswrapper[7604]: I0309 16:46:12.729087 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:13.121643 master-0 kubenswrapper[7604]: I0309 16:46:13.121384 7604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes" Mar 09 16:46:13.122380 master-0 kubenswrapper[7604]: I0309 16:46:13.121808 7604 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 09 16:46:13.728976 master-0 kubenswrapper[7604]: I0309 16:46:13.728712 7604 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 
09 16:46:13.728976 master-0 kubenswrapper[7604]: [-]has-synced failed: reason withheld Mar 09 16:46:13.728976 master-0 kubenswrapper[7604]: [+]process-running ok Mar 09 16:46:13.728976 master-0 kubenswrapper[7604]: healthz check failed Mar 09 16:46:13.728976 master-0 kubenswrapper[7604]: I0309 16:46:13.728798 7604 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:13.734068 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 09 16:46:13.759108 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 09 16:46:13.759378 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 09 16:46:13.760473 master-0 systemd[1]: kubelet.service: Consumed 2min 55.892s CPU time. Mar 09 16:46:13.772346 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 09 16:46:13.945238 master-0 kubenswrapper[32968]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:46:13.945238 master-0 kubenswrapper[32968]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 09 16:46:13.945238 master-0 kubenswrapper[32968]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:46:13.945238 master-0 kubenswrapper[32968]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:46:13.945238 master-0 kubenswrapper[32968]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 09 16:46:13.945238 master-0 kubenswrapper[32968]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 09 16:46:13.946284 master-0 kubenswrapper[32968]: I0309 16:46:13.945338 32968 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947863 32968 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947886 32968 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947892 32968 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947897 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947903 32968 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947909 32968 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947914 32968 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 09 16:46:13.947906 master-0 kubenswrapper[32968]: W0309 16:46:13.947920 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947926 32968 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947931 32968 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947935 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947943 32968 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947947 32968 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947952 32968 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947956 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947960 32968 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947963 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947967 32968 feature_gate.go:330] unrecognized feature gate: 
AlibabaPlatform Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947971 32968 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947976 32968 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947980 32968 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947983 32968 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947987 32968 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947991 32968 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.947998 32968 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.948002 32968 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.948006 32968 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 09 16:46:13.948464 master-0 kubenswrapper[32968]: W0309 16:46:13.948010 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948014 32968 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948018 32968 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948022 32968 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 09 16:46:13.948998 master-0 
kubenswrapper[32968]: W0309 16:46:13.948026 32968 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948030 32968 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948035 32968 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948039 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948044 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948048 32968 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948054 32968 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948058 32968 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948062 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948066 32968 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948070 32968 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948076 32968 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948080 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: 
W0309 16:46:13.948083 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948087 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948091 32968 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 09 16:46:13.948998 master-0 kubenswrapper[32968]: W0309 16:46:13.948095 32968 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948101 32968 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948105 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948110 32968 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948114 32968 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948118 32968 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948122 32968 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948126 32968 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948130 32968 feature_gate.go:330] unrecognized feature gate: Example Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948134 32968 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948138 32968 
feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948141 32968 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948148 32968 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948152 32968 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948155 32968 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948159 32968 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948163 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948167 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948171 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948175 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 09 16:46:13.949567 master-0 kubenswrapper[32968]: W0309 16:46:13.948179 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: W0309 16:46:13.948182 32968 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: W0309 16:46:13.948186 32968 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: W0309 16:46:13.948192 32968 feature_gate.go:353] Setting 
GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: W0309 16:46:13.948198 32968 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948329 32968 flags.go:64] FLAG: --address="0.0.0.0" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948339 32968 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948346 32968 flags.go:64] FLAG: --anonymous-auth="true" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948352 32968 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948362 32968 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948367 32968 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948374 32968 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948379 32968 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948384 32968 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948388 32968 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948393 32968 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948398 32968 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 09 16:46:13.950182 
master-0 kubenswrapper[32968]: I0309 16:46:13.948405 32968 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948410 32968 flags.go:64] FLAG: --cgroup-root="" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948414 32968 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948438 32968 flags.go:64] FLAG: --client-ca-file="" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948443 32968 flags.go:64] FLAG: --cloud-config="" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948448 32968 flags.go:64] FLAG: --cloud-provider="" Mar 09 16:46:13.950182 master-0 kubenswrapper[32968]: I0309 16:46:13.948452 32968 flags.go:64] FLAG: --cluster-dns="[]" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948460 32968 flags.go:64] FLAG: --cluster-domain="" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948465 32968 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948474 32968 flags.go:64] FLAG: --config-dir="" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948480 32968 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948486 32968 flags.go:64] FLAG: --container-log-max-files="5" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948494 32968 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948499 32968 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948505 32968 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 
16:46:13.948510 32968 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948516 32968 flags.go:64] FLAG: --contention-profiling="false" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948523 32968 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948528 32968 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948533 32968 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948537 32968 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948543 32968 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948548 32968 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948560 32968 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948565 32968 flags.go:64] FLAG: --enable-load-reader="false" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948572 32968 flags.go:64] FLAG: --enable-server="true" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948577 32968 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948584 32968 flags.go:64] FLAG: --event-burst="100" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948589 32968 flags.go:64] FLAG: --event-qps="50" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948593 32968 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948598 32968 
flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 09 16:46:13.951086 master-0 kubenswrapper[32968]: I0309 16:46:13.948602 32968 flags.go:64] FLAG: --eviction-hard="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948626 32968 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948631 32968 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948638 32968 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948643 32968 flags.go:64] FLAG: --eviction-soft="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948648 32968 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948652 32968 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948657 32968 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948661 32968 flags.go:64] FLAG: --experimental-mounter-path="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948666 32968 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948679 32968 flags.go:64] FLAG: --fail-swap-on="true" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948686 32968 flags.go:64] FLAG: --feature-gates="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948692 32968 flags.go:64] FLAG: --file-check-frequency="20s" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948696 32968 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948703 32968 flags.go:64] 
FLAG: --hairpin-mode="promiscuous-bridge" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948708 32968 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948713 32968 flags.go:64] FLAG: --healthz-port="10248" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948717 32968 flags.go:64] FLAG: --help="false" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948722 32968 flags.go:64] FLAG: --hostname-override="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948726 32968 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948734 32968 flags.go:64] FLAG: --http-check-frequency="20s" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948739 32968 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948743 32968 flags.go:64] FLAG: --image-credential-provider-config="" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948747 32968 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948752 32968 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 09 16:46:13.951925 master-0 kubenswrapper[32968]: I0309 16:46:13.948756 32968 flags.go:64] FLAG: --image-service-endpoint="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948761 32968 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948765 32968 flags.go:64] FLAG: --kube-api-burst="100" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948773 32968 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948781 32968 flags.go:64] FLAG: 
--kube-api-qps="50" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948786 32968 flags.go:64] FLAG: --kube-reserved="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948790 32968 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948795 32968 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948800 32968 flags.go:64] FLAG: --kubelet-cgroups="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948804 32968 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948808 32968 flags.go:64] FLAG: --lock-file="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948813 32968 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948820 32968 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948825 32968 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948831 32968 flags.go:64] FLAG: --log-json-split-stream="false" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948835 32968 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948840 32968 flags.go:64] FLAG: --log-text-split-stream="false" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948847 32968 flags.go:64] FLAG: --logging-format="text" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948851 32968 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948856 32968 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948861 32968 flags.go:64] FLAG: --manifest-url="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948870 32968 flags.go:64] FLAG: --manifest-url-header="" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948877 32968 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948882 32968 flags.go:64] FLAG: --max-open-files="1000000" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948887 32968 flags.go:64] FLAG: --max-pods="110" Mar 09 16:46:13.952648 master-0 kubenswrapper[32968]: I0309 16:46:13.948892 32968 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948897 32968 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948902 32968 flags.go:64] FLAG: --memory-manager-policy="None" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948906 32968 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948914 32968 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948918 32968 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948923 32968 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948940 32968 flags.go:64] FLAG: --node-status-max-images="50" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948946 32968 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 09 16:46:13.953367 master-0 
kubenswrapper[32968]: I0309 16:46:13.948952 32968 flags.go:64] FLAG: --oom-score-adj="-999" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948963 32968 flags.go:64] FLAG: --pod-cidr="" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948968 32968 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948978 32968 flags.go:64] FLAG: --pod-manifest-path="" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948983 32968 flags.go:64] FLAG: --pod-max-pids="-1" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948989 32968 flags.go:64] FLAG: --pods-per-core="0" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948993 32968 flags.go:64] FLAG: --port="10250" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.948998 32968 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949003 32968 flags.go:64] FLAG: --provider-id="" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949009 32968 flags.go:64] FLAG: --qos-reserved="" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949017 32968 flags.go:64] FLAG: --read-only-port="10255" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949022 32968 flags.go:64] FLAG: --register-node="true" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949027 32968 flags.go:64] FLAG: --register-schedulable="true" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949032 32968 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 09 16:46:13.953367 master-0 kubenswrapper[32968]: I0309 16:46:13.949041 32968 flags.go:64] FLAG: --registry-burst="10" Mar 09 
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949046 32968 flags.go:64] FLAG: --registry-qps="5"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949052 32968 flags.go:64] FLAG: --reserved-cpus=""
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949061 32968 flags.go:64] FLAG: --reserved-memory=""
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949068 32968 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949075 32968 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949080 32968 flags.go:64] FLAG: --rotate-certificates="false"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949085 32968 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949090 32968 flags.go:64] FLAG: --runonce="false"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949095 32968 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949100 32968 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949104 32968 flags.go:64] FLAG: --seccomp-default="false"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949112 32968 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949117 32968 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949122 32968 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949128 32968 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949133 32968 flags.go:64] FLAG: --storage-driver-password="root"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949161 32968 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949169 32968 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949174 32968 flags.go:64] FLAG: --storage-driver-user="root"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949195 32968 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949241 32968 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949247 32968 flags.go:64] FLAG: --system-cgroups=""
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949253 32968 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949272 32968 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 09 16:46:13.954104 master-0 kubenswrapper[32968]: I0309 16:46:13.949278 32968 flags.go:64] FLAG: --tls-cert-file=""
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949283 32968 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949300 32968 flags.go:64] FLAG: --tls-min-version=""
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949304 32968 flags.go:64] FLAG: --tls-private-key-file=""
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949309 32968 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949314 32968 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949322 32968 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949328 32968 flags.go:64] FLAG: --v="2"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949345 32968 flags.go:64] FLAG: --version="false"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949353 32968 flags.go:64] FLAG: --vmodule=""
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949362 32968 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: I0309 16:46:13.949368 32968 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949840 32968 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949851 32968 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949856 32968 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949860 32968 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949865 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949869 32968 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949873 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949877 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949881 32968 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949885 32968 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:46:13.954839 master-0 kubenswrapper[32968]: W0309 16:46:13.949889 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949893 32968 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949898 32968 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949903 32968 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949907 32968 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949912 32968 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949917 32968 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949921 32968 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949926 32968 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949930 32968 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949934 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949938 32968 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949942 32968 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949945 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949949 32968 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949953 32968 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949956 32968 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949960 32968 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949964 32968 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:46:13.955907 master-0 kubenswrapper[32968]: W0309 16:46:13.949968 32968 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949973 32968 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949978 32968 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949982 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949986 32968 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949990 32968 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949994 32968 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.949997 32968 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950001 32968 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950005 32968 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950008 32968 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950012 32968 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950016 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950020 32968 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950024 32968 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950027 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950031 32968 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950035 32968 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950046 32968 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950052 32968 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:46:13.956651 master-0 kubenswrapper[32968]: W0309 16:46:13.950056 32968 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950059 32968 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950063 32968 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950067 32968 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950071 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950074 32968 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950078 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950082 32968 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950085 32968 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950089 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950092 32968 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950096 32968 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950101 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950104 32968 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950108 32968 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950111 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950115 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950119 32968 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950123 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950127 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:46:13.957354 master-0 kubenswrapper[32968]: W0309 16:46:13.950131 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.950134 32968 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.950138 32968 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: I0309 16:46:13.950152 32968 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: I0309 16:46:13.955607 32968 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: I0309 16:46:13.955630 32968 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955739 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955746 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955751 32968 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955755 32968 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955759 32968 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955764 32968 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955768 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955772 32968 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955776 32968 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:46:13.958019 master-0 kubenswrapper[32968]: W0309 16:46:13.955781 32968 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955787 32968 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955791 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955796 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955800 32968 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955804 32968 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955808 32968 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955812 32968 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955816 32968 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955821 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955825 32968 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955830 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955835 32968 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955840 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955845 32968 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955849 32968 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955853 32968 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955857 32968 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955861 32968 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:46:13.958453 master-0 kubenswrapper[32968]: W0309 16:46:13.955867 32968 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955872 32968 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955877 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955883 32968 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955887 32968 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955891 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955895 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955899 32968 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955903 32968 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955908 32968 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955912 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955917 32968 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955922 32968 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955928 32968 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955932 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955937 32968 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955942 32968 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955947 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955952 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955956 32968 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:46:13.958983 master-0 kubenswrapper[32968]: W0309 16:46:13.955960 32968 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.955976 32968 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.955981 32968 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.955985 32968 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.955990 32968 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.955994 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.955998 32968 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956002 32968 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956006 32968 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956010 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956014 32968 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956018 32968 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956022 32968 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956026 32968 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956029 32968 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956033 32968 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956037 32968 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956041 32968 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956045 32968 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956050 32968 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:46:13.959909 master-0 kubenswrapper[32968]: W0309 16:46:13.956055 32968 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956060 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956064 32968 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956067 32968 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: I0309 16:46:13.956073 32968 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956194 32968 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956200 32968 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956205 32968 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956210 32968 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956214 32968 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956218 32968 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956222 32968 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956226 32968 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956230 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956235 32968 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956239 32968 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 09 16:46:13.960670 master-0 kubenswrapper[32968]: W0309 16:46:13.956242 32968 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956246 32968 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956250 32968 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956254 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956257 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956261 32968 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956265 32968 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956268 32968 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956272 32968 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956276 32968 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956279 32968 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956283 32968 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956287 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956291 32968 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956296 32968 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956301 32968 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956305 32968 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956310 32968 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 09 16:46:13.961196 master-0 kubenswrapper[32968]: W0309 16:46:13.956314 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956319 32968 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956323 32968 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956327 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956331 32968 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956338 32968 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956342 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956346 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956350 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956354 32968 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956358 32968 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956362 32968 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956366 32968 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956370 32968 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956374 32968 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956378 32968 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956383 32968 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956388 32968 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956392 32968 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 09 16:46:13.961763 master-0 kubenswrapper[32968]: W0309 16:46:13.956396 32968 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956400 32968 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956404 32968 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956408 32968 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956411 32968 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956415 32968 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956435 32968 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956439 32968 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956443 32968 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956447 32968 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956451 32968 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956456 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956459 32968 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956464 32968 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956468 32968 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956472 32968 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956476 32968 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956480 32968 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956486 32968 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956491 32968 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 09 16:46:13.962861 master-0 kubenswrapper[32968]: W0309 16:46:13.956495 32968 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: W0309 16:46:13.956499 32968 feature_gate.go:330] unrecognized feature gate: Example
Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: W0309 16:46:13.956504 32968 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: W0309 16:46:13.956508 32968 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.956514 32968 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.956650 32968 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.958242 32968 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 09 16:46:13.963840 master-0
kubenswrapper[32968]: I0309 16:46:13.958319 32968 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.958551 32968 server.go:997] "Starting client certificate rotation" Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.958563 32968 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.958739 32968 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 10:22:49.601667554 +0000 UTC Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.958807 32968 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h36m35.642862331s for next certificate rotation Mar 09 16:46:13.963840 master-0 kubenswrapper[32968]: I0309 16:46:13.959170 32968 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 09 16:46:13.964226 master-0 kubenswrapper[32968]: I0309 16:46:13.960403 32968 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 09 16:46:13.964226 master-0 kubenswrapper[32968]: I0309 16:46:13.963013 32968 log.go:25] "Validated CRI v1 runtime API" Mar 09 16:46:13.967288 master-0 kubenswrapper[32968]: I0309 16:46:13.967246 32968 log.go:25] "Validated CRI v1 image API" Mar 09 16:46:13.968556 master-0 kubenswrapper[32968]: I0309 16:46:13.968515 32968 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 09 16:46:13.979718 master-0 kubenswrapper[32968]: I0309 16:46:13.979400 32968 fs.go:135] Filesystem UUIDs: map[4d92f182-6acb-4a41-8103-6903266f66d5:/dev/vda3 7B77-95E7:/dev/vda2 
910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 09 16:46:13.980596 master-0 kubenswrapper[32968]: I0309 16:46:13.979486 32968 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344/userdata/shm major:0 minor:522 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0f0a39d805a27ae6402fcdfc0601eab19733f53f21a52d2a798a59ad90607729/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0f0a39d805a27ae6402fcdfc0601eab19733f53f21a52d2a798a59ad90607729/userdata/shm major:0 minor:319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16090dface4ebfac4ce59503c1b97e63c47315ed98b676af9cb614a7646af5db/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16090dface4ebfac4ce59503c1b97e63c47315ed98b676af9cb614a7646af5db/userdata/shm major:0 minor:757 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea/userdata/shm major:0 minor:722 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d2a3afb8eb1e0a8c25b36f8e7877fb572cd427c87f5ea499b36180c2a18273c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d2a3afb8eb1e0a8c25b36f8e7877fb572cd427c87f5ea499b36180c2a18273c/userdata/shm major:0 minor:475 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef/userdata/shm major:0 minor:823 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29f3efce623abd11180f220d3e9cf221f9f6cf57527de2211126a65b38f4186b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29f3efce623abd11180f220d3e9cf221f9f6cf57527de2211126a65b38f4186b/userdata/shm major:0 minor:763 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6/userdata/shm major:0 minor:1073 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460/userdata/shm major:0 minor:750 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c/userdata/shm major:0 minor:80 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546/userdata/shm major:0 minor:825 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c/userdata/shm major:0 minor:820 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec/userdata/shm major:0 minor:138 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48ea3b1c1a43df7f7909a26935d767da157bf2e1b5a1c65e482d9227e70712b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48ea3b1c1a43df7f7909a26935d767da157bf2e1b5a1c65e482d9227e70712b8/userdata/shm major:0 minor:1182 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04/userdata/shm major:0 minor:1005 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703/userdata/shm major:0 minor:802 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134/userdata/shm major:0 minor:313 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca/userdata/shm major:0 minor:523 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e/userdata/shm major:0 minor:761 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0/userdata/shm major:0 minor:214 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5534d85f0a9fe740eb26ccac2e47ce52d44e3f557fa5be108af8630168b4e7ab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5534d85f0a9fe740eb26ccac2e47ce52d44e3f557fa5be108af8630168b4e7ab/userdata/shm major:0 minor:759 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/57aaf330726fe627a8a61909fad0b332f97b99d8101a20fb9a743ae449fbfca5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/57aaf330726fe627a8a61909fad0b332f97b99d8101a20fb9a743ae449fbfca5/userdata/shm major:0 minor:1088 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30/userdata/shm major:0 minor:426 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559/userdata/shm major:0 minor:1048 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98/userdata/shm major:0 minor:797 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6/userdata/shm major:0 minor:516 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b/userdata/shm major:0 minor:429 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6bef13556b054eeec06112dd3efb63b9b2d0c3aa5b54369f3f112afc33fa6fa0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6bef13556b054eeec06112dd3efb63b9b2d0c3aa5b54369f3f112afc33fa6fa0/userdata/shm major:0 minor:737 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02/userdata/shm major:0 minor:525 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/70eddae976602b0fd7a417da85764552e2ce702063285733d01e52d020ee14c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/70eddae976602b0fd7a417da85764552e2ce702063285733d01e52d020ee14c3/userdata/shm major:0 minor:1189 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d/userdata/shm major:0 minor:511 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b/userdata/shm major:0 minor:612 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae/userdata/shm major:0 minor:1161 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d/userdata/shm major:0 minor:727 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d/userdata/shm major:0 minor:506 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91158bad31d126f335930945d685253a8862c41cc0ef9e00a780fb2229ca874e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91158bad31d126f335930945d685253a8862c41cc0ef9e00a780fb2229ca874e/userdata/shm major:0 minor:821 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9851e44d22a4912195681afea0e67c8f9b72db3658de58af22ee3dada2512884/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9851e44d22a4912195681afea0e67c8f9b72db3658de58af22ee3dada2512884/userdata/shm major:0 minor:405 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47/userdata/shm major:0 minor:611 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9dc2251ac339285f7e616265d59b743eecae28fcec97875a6787ff662520db27/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9dc2251ac339285f7e616265d59b743eecae28fcec97875a6787ff662520db27/userdata/shm major:0 minor:994 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762/userdata/shm major:0 minor:362 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9b628cdb80b26fca66723feadbd65d1a0479ac8b305d4bb2d0a1150e9146e96/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9b628cdb80b26fca66723feadbd65d1a0479ac8b305d4bb2d0a1150e9146e96/userdata/shm major:0 minor:753 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b0a3a4ee0305c897e72b7253be6cebaee1b1c6c54eed95437052e11964c648c2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b0a3a4ee0305c897e72b7253be6cebaee1b1c6c54eed95437052e11964c648c2/userdata/shm major:0 minor:756 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b43b8b247bcf7dd91e3dade29e3c0373e4989b5f279bccec521a6e0e7ca4f4e0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b43b8b247bcf7dd91e3dade29e3c0373e4989b5f279bccec521a6e0e7ca4f4e0/userdata/shm major:0 minor:69 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e/userdata/shm major:0 minor:487 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217/userdata/shm major:0 minor:824 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c400ace13e0290ea978d90a75cda129235df657b46ef5808d10268996d05129a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c400ace13e0290ea978d90a75cda129235df657b46ef5808d10268996d05129a/userdata/shm major:0 minor:1007 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca6c68bebab4667be94ed8d4950c8443a1dc101549e30dea2fc49d8db92f1da8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca6c68bebab4667be94ed8d4950c8443a1dc101549e30dea2fc49d8db92f1da8/userdata/shm major:0 minor:1153 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029/userdata/shm major:0 minor:610 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d7bb1ade7135b46fd5c4d6dd8420520ed7e496d3520bdd197b24cd39361e4974/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7bb1ade7135b46fd5c4d6dd8420520ed7e496d3520bdd197b24cd39361e4974/userdata/shm major:0 minor:385 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70/userdata/shm major:0 minor:102 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf/userdata/shm major:0 minor:314 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e68e729dc16b7303d9fa69af7f0d39f2249d9f66e6c9ceb43ec2254fd7af17fe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e68e729dc16b7303d9fa69af7f0d39f2249d9f66e6c9ceb43ec2254fd7af17fe/userdata/shm major:0 minor:1025 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eb70b637ebcdf20545438ca3a9998bdd103e60d200280f4b769a5fd812b5a907/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eb70b637ebcdf20545438ca3a9998bdd103e60d200280f4b769a5fd812b5a907/userdata/shm major:0 minor:819 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5/userdata/shm major:0 minor:502 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fb4bd8cef53e72d659379e281e583b2e2ff3d1ae2b420acbf269067cfbc2882a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb4bd8cef53e72d659379e281e583b2e2ff3d1ae2b420acbf269067cfbc2882a/userdata/shm major:0 minor:1071 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a/userdata/shm major:0 minor:96 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fd39f0db4c8cb49b906ba36723dbeb15b7ced8a9a0505c21a799794cabf48a9c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd39f0db4c8cb49b906ba36723dbeb15b7ced8a9a0505c21a799794cabf48a9c/userdata/shm major:0 minor:826 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808/userdata/shm major:0 minor:692 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~projected/kube-api-access-hkrlr:{mountpoint:/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~projected/kube-api-access-hkrlr major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:471 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d/volumes/kubernetes.io~projected/kube-api-access-wn8hj:{mountpoint:/var/lib/kubelet/pods/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d/volumes/kubernetes.io~projected/kube-api-access-wn8hj major:0 minor:804 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~projected/kube-api-access-xkjv9:{mountpoint:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~projected/kube-api-access-xkjv9 major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18f0164f-0875-4668-b155-df69e05e8ae0/volumes/kubernetes.io~projected/kube-api-access-pq2bk:{mountpoint:/var/lib/kubelet/pods/18f0164f-0875-4668-b155-df69e05e8ae0/volumes/kubernetes.io~projected/kube-api-access-pq2bk major:0 minor:646 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18f0164f-0875-4668-b155-df69e05e8ae0/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/18f0164f-0875-4668-b155-df69e05e8ae0/volumes/kubernetes.io~secret/cert major:0 minor:647 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ba020e0-1728-4e56-9618-d0ec3d9126eb/volumes/kubernetes.io~projected/kube-api-access-tnw68:{mountpoint:/var/lib/kubelet/pods/1ba020e0-1728-4e56-9618-d0ec3d9126eb/volumes/kubernetes.io~projected/kube-api-access-tnw68 major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1da6f189-535a-4bf1-bbdb-758327651ae2/volumes/kubernetes.io~projected/kube-api-access-xgl27:{mountpoint:/var/lib/kubelet/pods/1da6f189-535a-4bf1-bbdb-758327651ae2/volumes/kubernetes.io~projected/kube-api-access-xgl27 major:0 minor:817 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~projected/kube-api-access-4zxck:{mountpoint:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~projected/kube-api-access-4zxck major:0 minor:249 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~projected/kube-api-access-kc2t2:{mountpoint:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~projected/kube-api-access-kc2t2 major:0 minor:480 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/encryption-config major:0 minor:477 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/etcd-client major:0 minor:479 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/serving-cert major:0 minor:478 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/kube-api-access-psgk6:{mountpoint:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/kube-api-access-psgk6 major:0 minor:228 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:605 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~projected/kube-api-access major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34c0b60e-da69-452d-858d-0af77f18946d/volumes/kubernetes.io~projected/kube-api-access-vmdb8:{mountpoint:/var/lib/kubelet/pods/34c0b60e-da69-452d-858d-0af77f18946d/volumes/kubernetes.io~projected/kube-api-access-vmdb8 major:0 minor:735 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34c0b60e-da69-452d-858d-0af77f18946d/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/34c0b60e-da69-452d-858d-0af77f18946d/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/357570a4-f69b-4970-9b6f-fe06fc4c2f90/volumes/kubernetes.io~projected/kube-api-access-495rn:{mountpoint:/var/lib/kubelet/pods/357570a4-f69b-4970-9b6f-fe06fc4c2f90/volumes/kubernetes.io~projected/kube-api-access-495rn major:0 minor:738 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/357570a4-f69b-4970-9b6f-fe06fc4c2f90/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/357570a4-f69b-4970-9b6f-fe06fc4c2f90/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:734 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3745c679-2ea9-4382-9270-4d3fbbaaf296/volumes/kubernetes.io~projected/kube-api-access-jgj24:{mountpoint:/var/lib/kubelet/pods/3745c679-2ea9-4382-9270-4d3fbbaaf296/volumes/kubernetes.io~projected/kube-api-access-jgj24 major:0 minor:724 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~projected/kube-api-access-j244n:{mountpoint:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~projected/kube-api-access-j244n major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~secret/serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ec3050d-8e6f-466a-995a-f78270408a85/volumes/kubernetes.io~projected/kube-api-access-qsbkx:{mountpoint:/var/lib/kubelet/pods/3ec3050d-8e6f-466a-995a-f78270408a85/volumes/kubernetes.io~projected/kube-api-access-qsbkx major:0 minor:846 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ec3050d-8e6f-466a-995a-f78270408a85/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/3ec3050d-8e6f-466a-995a-f78270408a85/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:818 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~projected/kube-api-access-497s5:{mountpoint:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~projected/kube-api-access-497s5 major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~projected/ca-certs major:0 minor:414 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~projected/kube-api-access-p8rjs:{mountpoint:/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~projected/kube-api-access-p8rjs major:0 minor:428 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:470 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~projected/kube-api-access-rrt7m:{mountpoint:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~projected/kube-api-access-rrt7m major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5587e967-124e-4f2a-b7fb-42cb16bfc337/volumes/kubernetes.io~projected/kube-api-access-4dzfq:{mountpoint:/var/lib/kubelet/pods/5587e967-124e-4f2a-b7fb-42cb16bfc337/volumes/kubernetes.io~projected/kube-api-access-4dzfq major:0 minor:711 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5587e967-124e-4f2a-b7fb-42cb16bfc337/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/5587e967-124e-4f2a-b7fb-42cb16bfc337/volumes/kubernetes.io~secret/metrics-tls major:0 minor:809 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/57036838-9f42-4ea1-a5c9-77f820cc22c9/volumes/kubernetes.io~projected/kube-api-access-czkqg:{mountpoint:/var/lib/kubelet/pods/57036838-9f42-4ea1-a5c9-77f820cc22c9/volumes/kubernetes.io~projected/kube-api-access-czkqg major:0 minor:400 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~projected/kube-api-access-782hr:{mountpoint:/var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~projected/kube-api-access-782hr major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~projected/kube-api-access-kvh62:{mountpoint:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~projected/kube-api-access-kvh62 major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~secret/webhook-cert major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/631f2bdf-2ed4-4315-98c3-c5a538d0aec3/volumes/kubernetes.io~projected/kube-api-access-shpfl:{mountpoint:/var/lib/kubelet/pods/631f2bdf-2ed4-4315-98c3-c5a538d0aec3/volumes/kubernetes.io~projected/kube-api-access-shpfl major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/631f2bdf-2ed4-4315-98c3-c5a538d0aec3/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/631f2bdf-2ed4-4315-98c3-c5a538d0aec3/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:731 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/696fcca2-df1a-491d-956d-1cfda1ee5e70/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/696fcca2-df1a-491d-956d-1cfda1ee5e70/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~projected/kube-api-access-bdmsj:{mountpoint:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~projected/kube-api-access-bdmsj major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/etcd-client major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~projected/kube-api-access-98llp:{mountpoint:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~projected/kube-api-access-98llp major:0 minor:137 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:136 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/709aad35-08ca-4ff5-abe5-e1558c8dc83f/volumes/kubernetes.io~projected/kube-api-access-579rp:{mountpoint:/var/lib/kubelet/pods/709aad35-08ca-4ff5-abe5-e1558c8dc83f/volumes/kubernetes.io~projected/kube-api-access-579rp major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~projected/kube-api-access-4p2nd:{mountpoint:/var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~projected/kube-api-access-4p2nd major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~secret/metrics-tls major:0 minor:472 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~projected/kube-api-access-dxlnq:{mountpoint:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~projected/kube-api-access-dxlnq major:0 minor:1003 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/default-certificate major:0 minor:1001 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/metrics-certs major:0 minor:999 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/stats-auth major:0 minor:1002 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7937ccab-a6fb-4401-a4fd-7a2a91a7193f/volumes/kubernetes.io~projected/kube-api-access-cm4ff:{mountpoint:/var/lib/kubelet/pods/7937ccab-a6fb-4401-a4fd-7a2a91a7193f/volumes/kubernetes.io~projected/kube-api-access-cm4ff major:0 minor:308 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~projected/kube-api-access-rvkfn:{mountpoint:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~projected/kube-api-access-rvkfn major:0 minor:1160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1158 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1152 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1157 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes/kubernetes.io~projected/kube-api-access-rl5cz:{mountpoint:/var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes/kubernetes.io~projected/kube-api-access-rl5cz major:0 minor:518 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes/kubernetes.io~secret/serving-cert major:0 minor:481 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~projected/kube-api-access-rrms4:{mountpoint:/var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~projected/kube-api-access-rrms4 major:0 minor:1024 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~secret/certs major:0 minor:1023 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1018 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes/kubernetes.io~projected/kube-api-access-hw4zf:{mountpoint:/var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes/kubernetes.io~projected/kube-api-access-hw4zf major:0 minor:517 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes/kubernetes.io~secret/serving-cert major:0 minor:465 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~projected/kube-api-access-6n2qw:{mountpoint:/var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~projected/kube-api-access-6n2qw major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:800 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8972b380-8f87-4b73-8f95-440d34d03884/volumes/kubernetes.io~projected/kube-api-access-8hwnd:{mountpoint:/var/lib/kubelet/pods/8972b380-8f87-4b73-8f95-440d34d03884/volumes/kubernetes.io~projected/kube-api-access-8hwnd major:0 minor:742 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8972b380-8f87-4b73-8f95-440d34d03884/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/8972b380-8f87-4b73-8f95-440d34d03884/volumes/kubernetes.io~secret/proxy-tls major:0 minor:739 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8be2517a-6f28-4289-a108-6e3345a1e246/volumes/kubernetes.io~projected/kube-api-access-hh9fx:{mountpoint:/var/lib/kubelet/pods/8be2517a-6f28-4289-a108-6e3345a1e246/volumes/kubernetes.io~projected/kube-api-access-hh9fx major:0 minor:744 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8be2517a-6f28-4289-a108-6e3345a1e246/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8be2517a-6f28-4289-a108-6e3345a1e246/volumes/kubernetes.io~secret/serving-cert major:0 minor:741 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~projected/kube-api-access-nl5kt:{mountpoint:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~projected/kube-api-access-nl5kt major:0 minor:461 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/encryption-config major:0 minor:456 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/etcd-client major:0 minor:407 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/serving-cert major:0 minor:510 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/volumes/kubernetes.io~projected/kube-api-access-4gkxg:{mountpoint:/var/lib/kubelet/pods/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/volumes/kubernetes.io~projected/kube-api-access-4gkxg major:0 minor:794 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/volumes/kubernetes.io~secret/cert major:0 minor:790 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~projected/kube-api-access-dv8rh:{mountpoint:/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~projected/kube-api-access-dv8rh major:0 minor:1069 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1065 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1064 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9482fb93-c223-45ee-bde8-7667303270b6/volumes/kubernetes.io~projected/kube-api-access-qjf4p:{mountpoint:/var/lib/kubelet/pods/9482fb93-c223-45ee-bde8-7667303270b6/volumes/kubernetes.io~projected/kube-api-access-qjf4p major:0 minor:1004 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~projected/kube-api-access-fv95c:{mountpoint:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~projected/kube-api-access-fv95c major:0 minor:238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a320d845-3a5d-4027-a765-f0b2dc07f9de/volumes/kubernetes.io~projected/kube-api-access-868cs:{mountpoint:/var/lib/kubelet/pods/a320d845-3a5d-4027-a765-f0b2dc07f9de/volumes/kubernetes.io~projected/kube-api-access-868cs major:0 minor:795 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a320d845-3a5d-4027-a765-f0b2dc07f9de/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/a320d845-3a5d-4027-a765-f0b2dc07f9de/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:805 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a62ba179-443d-424f-8cff-c75677e8cd5c/volumes/kubernetes.io~projected/kube-api-access-z242f:{mountpoint:/var/lib/kubelet/pods/a62ba179-443d-424f-8cff-c75677e8cd5c/volumes/kubernetes.io~projected/kube-api-access-z242f major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6cd9347-eec9-4549-9de4-6033112634ce/volumes/kubernetes.io~projected/kube-api-access-lcvbf:{mountpoint:/var/lib/kubelet/pods/a6cd9347-eec9-4549-9de4-6033112634ce/volumes/kubernetes.io~projected/kube-api-access-lcvbf major:0 minor:796 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6cd9347-eec9-4549-9de4-6033112634ce/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/a6cd9347-eec9-4549-9de4-6033112634ce/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aec186fc-aead-47fb-a7e1-8c9325897c47/volumes/kubernetes.io~projected/kube-api-access-vj9cq:{mountpoint:/var/lib/kubelet/pods/aec186fc-aead-47fb-a7e1-8c9325897c47/volumes/kubernetes.io~projected/kube-api-access-vj9cq major:0 minor:715 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/af4aa8d4-09e1-4589-b7bf-885617a11337/volumes/kubernetes.io~projected/kube-api-access-whqdm:{mountpoint:/var/lib/kubelet/pods/af4aa8d4-09e1-4589-b7bf-885617a11337/volumes/kubernetes.io~projected/kube-api-access-whqdm major:0 minor:445 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af4aa8d4-09e1-4589-b7bf-885617a11337/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/af4aa8d4-09e1-4589-b7bf-885617a11337/volumes/kubernetes.io~secret/signing-key major:0 minor:440 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~projected/kube-api-access-xnstc:{mountpoint:/var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~projected/kube-api-access-xnstc major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1075 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/baf704e3-daf2-4934-a04e-d31df8df0c4a/volumes/kubernetes.io~projected/kube-api-access-nhglf:{mountpoint:/var/lib/kubelet/pods/baf704e3-daf2-4934-a04e-d31df8df0c4a/volumes/kubernetes.io~projected/kube-api-access-nhglf major:0 minor:815 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/baf704e3-daf2-4934-a04e-d31df8df0c4a/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/baf704e3-daf2-4934-a04e-d31df8df0c4a/volumes/kubernetes.io~secret/proxy-tls major:0 minor:814 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/be856881-2ceb-4803-8330-4a27ad8b1937/volumes/kubernetes.io~projected/kube-api-access-v98bk:{mountpoint:/var/lib/kubelet/pods/be856881-2ceb-4803-8330-4a27ad8b1937/volumes/kubernetes.io~projected/kube-api-access-v98bk major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~projected/kube-api-access-pr46z:{mountpoint:/var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~projected/kube-api-access-pr46z major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~secret/srv-cert major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c72e89f0-37ad-4515-89ba-ba1f52ba61f0/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/c72e89f0-37ad-4515-89ba-ba1f52ba61f0/volumes/kubernetes.io~projected/ca-certs major:0 minor:424 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c72e89f0-37ad-4515-89ba-ba1f52ba61f0/volumes/kubernetes.io~projected/kube-api-access-h8jvl:{mountpoint:/var/lib/kubelet/pods/c72e89f0-37ad-4515-89ba-ba1f52ba61f0/volumes/kubernetes.io~projected/kube-api-access-h8jvl major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:541 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~empty-dir/tmp major:0 minor:542 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~projected/kube-api-access-2mr7t:{mountpoint:/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~projected/kube-api-access-2mr7t major:0 minor:550 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~projected/kube-api-access-vqcqb:{mountpoint:/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~projected/kube-api-access-vqcqb major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~secret/srv-cert major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~projected/kube-api-access-nl7dv:{mountpoint:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~projected/kube-api-access-nl7dv major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~secret/serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d6b4992e-50f3-473c-aa83-ed35569ba307/volumes/kubernetes.io~projected/kube-api-access-bhzzg:{mountpoint:/var/lib/kubelet/pods/d6b4992e-50f3-473c-aa83-ed35569ba307/volumes/kubernetes.io~projected/kube-api-access-bhzzg major:0 minor:743 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6b4992e-50f3-473c-aa83-ed35569ba307/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d6b4992e-50f3-473c-aa83-ed35569ba307/volumes/kubernetes.io~secret/proxy-tls major:0 minor:740 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~projected/kube-api-access-sst4g:{mountpoint:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~projected/kube-api-access-sst4g major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:501 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:500 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/df2ec8b2-02d7-40c4-ac20-32615d689697/volumes/kubernetes.io~projected/kube-api-access-rfj7p:{mountpoint:/var/lib/kubelet/pods/df2ec8b2-02d7-40c4-ac20-32615d689697/volumes/kubernetes.io~projected/kube-api-access-rfj7p major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~projected/kube-api-access-ctsqs:{mountpoint:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~projected/kube-api-access-ctsqs major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e346cb5b-411d-4014-a8d0-590d8deee8ac/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/e346cb5b-411d-4014-a8d0-590d8deee8ac/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1000 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~projected/kube-api-access-whqvw:{mountpoint:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~projected/kube-api-access-whqvw major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~projected/kube-api-access-cvfgw:{mountpoint:/var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~projected/kube-api-access-cvfgw major:0 minor:1047 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1042 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1046 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e91a0e23-c95b-4290-9c0c-29101febfc8f/volumes/kubernetes.io~projected/kube-api-access-26xps:{mountpoint:/var/lib/kubelet/pods/e91a0e23-c95b-4290-9c0c-29101febfc8f/volumes/kubernetes.io~projected/kube-api-access-26xps major:0 minor:1188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e91a0e23-c95b-4290-9c0c-29101febfc8f/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/e91a0e23-c95b-4290-9c0c-29101febfc8f/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea34ff7e-27fa-4c26-acc0-ec551985eb76/volumes/kubernetes.io~projected/kube-api-access-fl697:{mountpoint:/var/lib/kubelet/pods/ea34ff7e-27fa-4c26-acc0-ec551985eb76/volumes/kubernetes.io~projected/kube-api-access-fl697 major:0 minor:749 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea34ff7e-27fa-4c26-acc0-ec551985eb76/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/ea34ff7e-27fa-4c26-acc0-ec551985eb76/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:748 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:494 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/volumes/kubernetes.io~secret/serving-cert major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~projected/kube-api-access-jrhct:{mountpoint:/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~projected/kube-api-access-jrhct major:0 minor:1068 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1066 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1067 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~projected/kube-api-access-h8p7w:{mountpoint:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~projected/kube-api-access-h8p7w major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1128 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~projected/kube-api-access-p9dfn:{mountpoint:/var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~projected/kube-api-access-p9dfn major:0 minor:123 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~secret/metrics-certs major:0 minor:467 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3033e86-fee0-45dc-ba74-d5448a777400/volumes/kubernetes.io~projected/kube-api-access-grmch:{mountpoint:/var/lib/kubelet/pods/f3033e86-fee0-45dc-ba74-d5448a777400/volumes/kubernetes.io~projected/kube-api-access-grmch major:0 minor:367 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/kube-api-access-5trxh:{mountpoint:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/kube-api-access-5trxh major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~secret/metrics-tls major:0 minor:464 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~projected/kube-api-access-98j7c:{mountpoint:/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~projected/kube-api-access-98j7c major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:600 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~projected/kube-api-access-55zwh:{mountpoint:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~projected/kube-api-access-55zwh major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~secret/cert major:0 minor:469 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:473 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/b7f0551f790d4e9259e587ec1641c9a6da7b371e90cacd06b0b8afea64076ff7/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-1010:{mountpoint:/var/lib/containers/storage/overlay/c1c0fb45b24ec07cf9aa9dc2d357fd8765f04d4a7a7c5bf9e48bdc2d2fe7cd9c/merged major:0 minor:1010 fsType:overlay blockSize:0} overlay_0-1012:{mountpoint:/var/lib/containers/storage/overlay/ed2639e519c41301384e472040b3ebad5de8b0b82d52ba56c6b57a05f259f387/merged major:0 minor:1012 fsType:overlay blockSize:0} overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/9ee630525adc0df6bfa1594b080800b6db9194d9b3984e9d60f3b59ac3c69795/merged major:0 minor:1014 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/28232966fc44909d60ee7f3cc58a69b0e2b9e215cc4133ae4445503aa0458cb2/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/f7a91ba0c576dbbf4897bc835e7b9514616a5484f1131296e3714a7bb2add073/merged major:0 minor:1027 fsType:overlay blockSize:0} 
overlay_0-1029:{mountpoint:/var/lib/containers/storage/overlay/7ef63e434d6bafda06bf29532ce787dae7a73de21f8ec2bff99f91a96ca32a73/merged major:0 minor:1029 fsType:overlay blockSize:0} overlay_0-1031:{mountpoint:/var/lib/containers/storage/overlay/d70f6923065f642f61c6a302c648e2d621270ddfd815ac6b612c3ad4ff863a37/merged major:0 minor:1031 fsType:overlay blockSize:0} overlay_0-1033:{mountpoint:/var/lib/containers/storage/overlay/ef541a1e35fe6ee233e12bee85facfd8c2fbc75df9e776c390ea67927391d48f/merged major:0 minor:1033 fsType:overlay blockSize:0} overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/328cc30361df82a02584cbf2a763fd882416dc84fbbf25f1eb2b8a091a53e3e1/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1044:{mountpoint:/var/lib/containers/storage/overlay/a59c98f8a71bae4f6779fe70e786a1100c46de7199e5ffd983105f0cc053e9a1/merged major:0 minor:1044 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/56f3fa3cb00585f07894b3900567f19cd3943ed0820afeabbb9c1b9976b74764/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1052:{mountpoint:/var/lib/containers/storage/overlay/0380edbed2f18332bb11a60da0ba31180b66340193e14c0c2bd3c6c39762de49/merged major:0 minor:1052 fsType:overlay blockSize:0} overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/91d4581d8cd78cec6b607f521ad2787612cd2aaf7f1b42a4b5c425a1f1b7cd02/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/41e23514274abc4c424555dab3a75bc6870409d458ee6cba0e89a5c91d75cee4/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-1076:{mountpoint:/var/lib/containers/storage/overlay/3c55150ddf94c9ad3f3851cea5f64bc239ffb35cee614cc07790d8a07ba5d607/merged major:0 minor:1076 fsType:overlay blockSize:0} overlay_0-1078:{mountpoint:/var/lib/containers/storage/overlay/b8782aaa02609b3030695db1d765d601b008da0aa46d82b127e2dae45d3ff7e0/merged major:0 minor:1078 fsType:overlay blockSize:0} 
overlay_0-1080:{mountpoint:/var/lib/containers/storage/overlay/45afcefbab4e02d9c464f3c89e506983cc695eddd72c7dd874e5a5c7d2612b5b/merged major:0 minor:1080 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/9c89d3c36830654bc910cefd9a53685ab49c26474209527b499a54d2dc7403cc/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1090:{mountpoint:/var/lib/containers/storage/overlay/8e8e75db697c27637b509b1f370c539a3acbbcb75f3d29058b60aeb016210a47/merged major:0 minor:1090 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/e7324ceb8ae6795e2cc8deae45dde3d04628ece15ebebc168382a5862186e769/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/8a4418fd377e524224a924802614703be5028537a04e34c22560db504231b964/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1099:{mountpoint:/var/lib/containers/storage/overlay/755c2476c97f3e7ebeb3fcfefcd3d4f1f53d62346d04620df3330a465d6d662d/merged major:0 minor:1099 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/51315601588b05b9d577e82c215e5b9d4a2de05e9be7dd68b12e3ccf19e1296c/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1101:{mountpoint:/var/lib/containers/storage/overlay/e15231b40697b3c3c78ead686a858d3cf21e8708223ffa9fcd650d3cc966c4bf/merged major:0 minor:1101 fsType:overlay blockSize:0} overlay_0-1103:{mountpoint:/var/lib/containers/storage/overlay/d2fea66d5d5f646bdc2bb9e04965dc2fd06c35373748fd2abcd735807525338b/merged major:0 minor:1103 fsType:overlay blockSize:0} overlay_0-1118:{mountpoint:/var/lib/containers/storage/overlay/0747056261e3970e7e7de2a488e9d74b90e3f4254f621b875e8da12ee8405a4e/merged major:0 minor:1118 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/be3e7fed4ace561093e7a55cb9671e6646e4b164322b7c529b295eedcb179608/merged major:0 minor:1132 fsType:overlay blockSize:0} 
overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/75a922787dd830ef9688867493534f07a7a4a05956d92d470c84b3747b615e79/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/2c28b0799860641ab43b365bba6fd6deea961a347aab7b02346b93d3b419cf5a/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-1148:{mountpoint:/var/lib/containers/storage/overlay/b8a49b484d3e5011dc78dd47c7d6cd6ea34919d98cd255fb013680a7be98a820/merged major:0 minor:1148 fsType:overlay blockSize:0} overlay_0-1155:{mountpoint:/var/lib/containers/storage/overlay/76a55d9a6f1748c832907d852d4e186510f300f01d3a1e85b1bdd84bd720e023/merged major:0 minor:1155 fsType:overlay blockSize:0} overlay_0-1163:{mountpoint:/var/lib/containers/storage/overlay/137a90efc247e51f8965dd979f1a99aff0f4e111e9ceb2f1dfc9e271354f8ca6/merged major:0 minor:1163 fsType:overlay blockSize:0} overlay_0-1165:{mountpoint:/var/lib/containers/storage/overlay/e41bc2a310a9cb72254cedadb3e02615cbab9aae4e918e61d98ce6d3413d2cd8/merged major:0 minor:1165 fsType:overlay blockSize:0} overlay_0-1167:{mountpoint:/var/lib/containers/storage/overlay/e2d65faf7dd47fd3a752760ae37c4f7ec6513e9b1843de61be39d9d007141507/merged major:0 minor:1167 fsType:overlay blockSize:0} overlay_0-117:{mountpoint:/var/lib/containers/storage/overlay/3f28c1460e05712af4ef99772323ac71e557385ae841a3e1c719f846261adb9a/merged major:0 minor:117 fsType:overlay blockSize:0} overlay_0-1173:{mountpoint:/var/lib/containers/storage/overlay/93c232d430e92154f81950b2fb5fd9e6bae770137e0f6703cfa19ce718f1e3bd/merged major:0 minor:1173 fsType:overlay blockSize:0} overlay_0-1185:{mountpoint:/var/lib/containers/storage/overlay/20735545020c65e42811896f9a43202cf2fc7367d8534796ea32470b98680ffc/merged major:0 minor:1185 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/092bb9f366da324c393e14b68c64775a6535116c2a57fc59cfc3f7fb05dd2d5c/merged major:0 minor:1191 fsType:overlay blockSize:0} 
overlay_0-1193:{mountpoint:/var/lib/containers/storage/overlay/48b86470471c687569708b36a61e7eb02d5d52fc3b9b444d6ef38db624f00580/merged major:0 minor:1193 fsType:overlay blockSize:0} overlay_0-1195:{mountpoint:/var/lib/containers/storage/overlay/3d4903f9ad5e7d3fde7ef2a0c8dcaa52e2e0d2ec362e12edaae6e32e95e15118/merged major:0 minor:1195 fsType:overlay blockSize:0} overlay_0-1205:{mountpoint:/var/lib/containers/storage/overlay/a527673573f4c6daa97936433b1c06835ffdfb67c24672de7b415ee59940c86c/merged major:0 minor:1205 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/5f202d04cd1b857ba9d84b656d848ea137b7304614dbd93071965d89855cabc5/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/f186833a5294fa1b6b8db524c652029c04a1311e2d9135c9b5c7a88cb91550a3/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-1224:{mountpoint:/var/lib/containers/storage/overlay/d03e9d49fa4ed4cc9f9867beccfae656da5899e0c8bf85f080010702e0b90f7c/merged major:0 minor:1224 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/8f030d4ef427335519c7c4860b808d0fd4281eff1af384f795942d886bfee2f7/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/09dd65f493d1bfc2d6bfbcedaeae27248f9e547de9ac397ac31ef3f34bf605f2/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/1160aac67a776a5c0ac3b43107ef0dd6a64de95e3f618b88f04aa5c45858980c/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-145:{mountpoint:/var/lib/containers/storage/overlay/ebf97b0f8d4139acce2032a3b358832cb2a8a3ed004bdb46c319ac33ce9f5c1e/merged major:0 minor:145 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/611b546a073ffe1d4bc64a5ed52c21c0b2487d2d7228cb02c8be7667a8782247/merged major:0 minor:146 fsType:overlay blockSize:0} 
overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/6115caa13996427048ed77dafc462f1a6a8229d09c0f3505b87e04af5b812ef2/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/2d547e8f15944913babddadd93a6a64d0c93d66500dedcfc24a1f43fca428186/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/ddc6a10ff34bd9051f71d06262b0212f36b178faf3119d1afc6287c4f18a7868/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/50c31523fc98ee9ad8c89a5c7becdab72c1f9082ba9a3f83908edbab96bb113f/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/b85d9e75cdc9bb125a2d87afd471dbaf333dd84c039dc3278c3441b5e3dda12e/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/fb79309b447269cf4f3e9e4237d8d0e9a4d1cef082fcd6509129c04db7b55998/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/6d85c40ca368b51328ba3809cb38b2886d9094fdc1de4fdc4bdd4919f65b26cc/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/ebbf897fe843d00b20b2dbbcb7a74d04c54323c4fa1bd5bd56aa7863af5ddbc8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/c3f1ba8e36111adb5cd1969876084945ed98684c65a962de8d588e8624234162/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/fe72b238e05c936faa97c18a1b91c6e64a09f038824e8204dd84a0e50df6d40d/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/04aa083832ffea3f422199e4f39ab995168e07582b05397cbb17e49eebdf72de/merged major:0 minor:194 fsType:overlay blockSize:0} 
overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/a36798aa3d9273b91ed5bdcfb87cb3788c7eeca462850f6f337519bf42a2dcd1/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/3126eecda6f4dc7ebf1a8847c8757ec57a1308595f9eb6b0a25fc58f38bc8e5a/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/205ed43c60b29132ab40eca10004c10df86adf4ccadb97dd10b36a4e85cf4b14/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/f8139dad7fa7db917cb97d2d68a508de283a0678544e8bf9401c85f280b344a0/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/facb76d782da8aee3773409c7ba9a73ab8130527113c0ca4637488124f3812f6/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/3c79bbcc273d41b4ad58ddf940c7c68c70f557d8b03b9aa7a31a08d0558b3a00/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/9677e1ab02b447f591512896b96b77274c1b9d26814ca78e5d21b688e34a4224/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/a46b9d763c1dea9e124c3550d7ee2ecfbfa08d99e3158a28bd2f071c543e29e4/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/3a9b659362ea67fa7db9e03817e6a3d422cfe2cf0ef1cc769847d092dd4f7f05/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/3b644279042c82404607fce4b8123ea39016085cbb3f6e49d50fddcda1bec701/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/cf9c35d5966fbd8db2ac6ae3f21ac6df23d8123ec3717315638e09d006ff32cb/merged major:0 minor:296 fsType:overlay blockSize:0} 
overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/7263ac980487d0ecc386858b3ef4eabacf2d6412c025bd422d6c3a5877e074da/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/937e72a6b6dea06e7ed4c28b77c68430df83bb5f59c4a904da2ddfccdd940f5d/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/df2db2e8cb8f3e5c26bd91bf47520cc2d2b15fc360a0b5385f39d3aa1647799e/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/a71b5a4239963acbd392f677f644a24aa46fa062a4ec107ad0b85cdf9efe4766/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/12405b8d102630076cff1acc8970de4d913b6bfe437b7467c8144fa58ef5248e/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/961c363abe901520e0ed9a2392bd9fd3f9b7f3dd715848748b87df3c3796079d/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/2e70d07593425c7a9540448a97bef4febd84ae81901559f4a599070cf8850237/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/e3f280eb755f1dcd98cd85e6a07d20430c084ad52ebf1a9bf2bab7ae05194325/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/183526a69544ccf1a5b5fd4fe61f80c7d0480064782735c6f934f016c77e1114/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/1875486db438a52412c856acb2bacdc4e11896b20065849aedda78e22ebcbec4/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/a85f3449b3bf04b5c7bae8528041edd0fd5a398ee4334d71f8b769d8767d8131/merged major:0 minor:341 fsType:overlay blockSize:0} 
overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/03a6d5af6caf84d45e085d8d397e246abd470bcf262a1bb9d2343f30b1dd94bd/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/06a79ac92eac758ddb4e2a4eaa7ab828f1a8f7772195800aca47932df9b51f64/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/63a4891b1826b5021b1ab29f61432fe11fb0e0596ce4487276ac626278c6e6b2/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/621bc94d70d0402bd4e935d66eeaa844dcb20420242255ad0994025bd48a33c4/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/b2d79af1cd50a523ea7877edca5503fb5071714010f25d9737c0644c55fb350b/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/0b67fe43a4491e82b75f4decc9f9a9aa7144edffbd24c1e4d0247dcbf7908112/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/d07a6c095b06b7d6efb1c2c6fe15bea3d6079e9dfd319b2e3947d8d386d86c08/merged major:0 minor:368 fsType:overlay blockSize:0} overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/24c2c849c5d9e9ba978e1b1bc3e1b8f4ea7ad7aa4e08ae5637576fedb308acad/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-373:{mountpoint:/var/lib/containers/storage/overlay/262a7402ee396c171a35fe346e314976b4b476925ce9b2400aaec1634fc8aa18/merged major:0 minor:373 fsType:overlay blockSize:0} overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/df0dd829cffceed6358381095ec922d463d4af98c348fae55ec491f93bdb23c3/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/6b7f75eae861d648d99ae9da13ebcf29cea13421f3396637cd3d3804a4c3bdd4/merged major:0 minor:390 fsType:overlay blockSize:0} 
overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/5aeb4e28b18aef9afb96d9f39ad0aed1a6df50fd6c15bdfaf3ac1e4aa21af98d/merged major:0 minor:394 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/ab4fb39c0e714858eeee369049275c580ac23a3bc15916ee3ca768684ef7bce5/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/d1afcacb560781e9bbac4e3f1a29b84c8404df27c2672a8648f7fc27270a3d6c/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/d18a83844412a8d5f7ba54b68d971b6cdb26cc829cc9c92ae13f49aaa809f21a/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/f7daf9e6e318ef70eb952b18817a02c56f22a4b4834f971f92e5b16951cf95a3/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/c62fbd94514ca319d181954653cdf204da3b4b8aa6d636c75b126da3867eb14f/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/7da141a243c2de1fcfd9bbc72afae7f777e7dc2c250c6519035a1c2d57db6846/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-423:{mountpoint:/var/lib/containers/storage/overlay/c543ad78a852e490ff44f70684be5f08b41e3dd6aa5af208954376d703cbbdfa/merged major:0 minor:423 fsType:overlay blockSize:0} overlay_0-430:{mountpoint:/var/lib/containers/storage/overlay/0ada5b4d4c1e493f6c7cb9218e7a78dd5b1aa9b7877ac44f3aa03003e7e476bc/merged major:0 minor:430 fsType:overlay blockSize:0} overlay_0-431:{mountpoint:/var/lib/containers/storage/overlay/ee82e03610bfefa1ba61cb9f2056ae6eaa0962adb9a6ff3df866451909b16ba5/merged major:0 minor:431 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/6c47e1ecbc2ce3545cb88bd9fed73b13fd137ad4b19d76dd87606c1c3f86e36f/merged major:0 minor:44 fsType:overlay blockSize:0} 
overlay_0-449:{mountpoint:/var/lib/containers/storage/overlay/b588ce4f0f8f06fdd31066964ca8feecd9f0e02262424e369ac010bf0be47ea0/merged major:0 minor:449 fsType:overlay blockSize:0} overlay_0-458:{mountpoint:/var/lib/containers/storage/overlay/7d9b5b7e7baf8ce9483e8f4b5a09139fea2ee0e2545a8ab9dcb205a4a87d14b0/merged major:0 minor:458 fsType:overlay blockSize:0} overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/6501d07e8316376002ed64538c6baa4962963851cf585b11b626c3f03545a415/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-482:{mountpoint:/var/lib/containers/storage/overlay/88a891ce3a6d67e0e2a80566cff8949e5b7075270147f59f8cced5f012506d9c/merged major:0 minor:482 fsType:overlay blockSize:0} overlay_0-484:{mountpoint:/var/lib/containers/storage/overlay/96656f9ab2499fd1387791a90f0416c4f509fcccf5d86e8097c522660d4e4308/merged major:0 minor:484 fsType:overlay blockSize:0} overlay_0-495:{mountpoint:/var/lib/containers/storage/overlay/9a9c32d898a45969be739cdffc635700735d93ba0ae3d434eda204dd94e6db54/merged major:0 minor:495 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/da2e716be0df74045cb8ef1be4a1660135fbb2a862c829733288ec6bca7bbe7a/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/aeb8d8d82fe94a8a4098d90d6cf9d580db5475f1ac21f44b40ebd629548b0e16/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/8e5cef56a2a9fc42cdbe5281957d9e32670f75089c93712358c50b595b228955/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/8e656fe24b7bbe7dd7319655f953f460148ae03deba86de826ce526ea8eb8026/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/f1d292f8b8cfd98b1ae55e61c07520f9746299b5b9f67e7825b3f1ecbad1164e/merged major:0 minor:529 fsType:overlay blockSize:0} 
overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/525a717ed40d0b1e619ae24bf25a769b15d6a94c860893ac5bd57a7c5dad1411/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-533:{mountpoint:/var/lib/containers/storage/overlay/fc936bc20947fe007d6cbcd1cec1a2321c3f90ad2ca6031e648093b6e93b3fb9/merged major:0 minor:533 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/0311cf4098b62042b08c4eb950d3a354bd1158745a1249af258d4ae7cf022c4a/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-551:{mountpoint:/var/lib/containers/storage/overlay/002fbdef2c393c1498084d24af7842ae50e65907fb85551fd3322cd0be736c30/merged major:0 minor:551 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/4111e6f23714f307e6188539cb6002d1f0819674803d1f280ff0e52dc5239bd2/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-559:{mountpoint:/var/lib/containers/storage/overlay/8c5e3a4ea3dfff8931322e2d09412969f539cb4c588cbb88a5a970fceefb5d29/merged major:0 minor:559 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/c61bc95ed3b33f3b3e7a2ef5bb082d306abb353e48a5c82d6fdf9cc754c16214/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-562:{mountpoint:/var/lib/containers/storage/overlay/bb1a186239f74f82849648bbfd60f1b627f70f2c508a08c49b46cf8008f96b29/merged major:0 minor:562 fsType:overlay blockSize:0} overlay_0-564:{mountpoint:/var/lib/containers/storage/overlay/a3154f8f639e796667e171414c0445f50ec7d4d129ad22cc1a6da41d61eca68a/merged major:0 minor:564 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/a3924784a2cbb69f1def826cc0a0150875a25a89a97b2d6a2a5733fa0a8e8729/merged major:0 minor:567 fsType:overlay blockSize:0} overlay_0-569:{mountpoint:/var/lib/containers/storage/overlay/60a0e9c5d8c195d1dae74a7623a6de5a9cc48d5412dd631841325e2b1c34b4e6/merged major:0 minor:569 fsType:overlay blockSize:0} 
overlay_0-57:{mountpoint:/var/lib/containers/storage/overlay/208a5fd8ec97989bd3f5f29580e84e99add8338ed0481207be8e790632c00408/merged major:0 minor:57 fsType:overlay blockSize:0} overlay_0-571:{mountpoint:/var/lib/containers/storage/overlay/e3ad5f1847c01400072c179b9f4a029d75660b597b4b0a4a6b667d06081bd43b/merged major:0 minor:571 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/b14d1c31015c26b13aa80b668fc9cbae467de515f03733531166bc590832485e/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/b397db8e4d22d6b59a7c9a633f58a7f77a75a18893b0a00b7d1ff936e837395d/merged major:0 minor:578 fsType:overlay blockSize:0} overlay_0-580:{mountpoint:/var/lib/containers/storage/overlay/d874907a73864c8b6c5806073d8fc9facc80bf4658dcb2f24ca2d24e05e07a99/merged major:0 minor:580 fsType:overlay blockSize:0} overlay_0-582:{mountpoint:/var/lib/containers/storage/overlay/e45efb5edba61f801a4ceb65c37a6ff4cde7f70c893764c10a34cec4abc62afa/merged major:0 minor:582 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/4b28b48a194ac4b91b0ade34a46e36af46fc6b32c48c3ce3ca053a337eed1c68/merged major:0 minor:587 fsType:overlay blockSize:0} overlay_0-593:{mountpoint:/var/lib/containers/storage/overlay/9192a1bc00400d097cf64c199a0e13da34623e173ab8c9682976867edac88f5e/merged major:0 minor:593 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/40b108ab57d3d18086a35e90fe95d2f5986803fe568433726afcc43a365f4eaf/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-604:{mountpoint:/var/lib/containers/storage/overlay/a532bb72efcfb5652de54a23d6b249f8614f0b745bfdaf8c1830c134541fa625/merged major:0 minor:604 fsType:overlay blockSize:0} overlay_0-616:{mountpoint:/var/lib/containers/storage/overlay/5f7174327481f3264dc48c652f49eb699669d300afa4e17ea8ebf2d61c979dae/merged major:0 minor:616 fsType:overlay blockSize:0} 
overlay_0-617:{mountpoint:/var/lib/containers/storage/overlay/1ef1957612abfcde6f39e73e21cbefb199e49e6590d83674e3128714fc7559cd/merged major:0 minor:617 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/e2c29f1d5590ea68894a433c40c76647f4460598f65ae455974cde795f20facf/merged major:0 minor:619 fsType:overlay blockSize:0} overlay_0-626:{mountpoint:/var/lib/containers/storage/overlay/05882cea85b0e81005d8b84585d8673643a088e6dcbfff8b0c41f76073feebee/merged major:0 minor:626 fsType:overlay blockSize:0} overlay_0-629:{mountpoint:/var/lib/containers/storage/overlay/d1bf386ea9ebafe1b1ace4fb08796debd1aa9aa80fff015094cc0041286fcac4/merged major:0 minor:629 fsType:overlay blockSize:0} overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/9287ed053bd13c0b0b4e51d4ba70abe1c715cac8b7bfc1c9772faa7d4de06882/merged major:0 minor:632 fsType:overlay blockSize:0} overlay_0-634:{mountpoint:/var/lib/containers/storage/overlay/9a3b744b271ea54448e39cea751f26d9d34be2ab256f169b896aeb97d61cb0ed/merged major:0 minor:634 fsType:overlay blockSize:0} overlay_0-636:{mountpoint:/var/lib/containers/storage/overlay/73f6625a01e6b2f57dbbd27561ac23ee7749fe4b70abbfbd3ad7ac01eced3d45/merged major:0 minor:636 fsType:overlay blockSize:0} overlay_0-640:{mountpoint:/var/lib/containers/storage/overlay/35160d1b650b207b152b84bcde8c9e32a60bb7765ab73c7627264ecbf0fecbaa/merged major:0 minor:640 fsType:overlay blockSize:0} overlay_0-643:{mountpoint:/var/lib/containers/storage/overlay/a41fdbc9a14ab1c45ba85cc65aef79682bfd89940d230f748460e843c72c247d/merged major:0 minor:643 fsType:overlay blockSize:0} overlay_0-644:{mountpoint:/var/lib/containers/storage/overlay/9ac88c6e3d3ee504620355680bb85b0b28cf0ecdac9739dcabd5d44ff2d93d0e/merged major:0 minor:644 fsType:overlay blockSize:0} overlay_0-648:{mountpoint:/var/lib/containers/storage/overlay/89a3d7100761df269e4dcff55d6988c91e65c2b01cea48c468f12cb04899ad0a/merged major:0 minor:648 fsType:overlay blockSize:0} 
overlay_0-65:{mountpoint:/var/lib/containers/storage/overlay/2888d1ddda22353778567b7724c093cc49c28db687da8d1b62cf264681178bd1/merged major:0 minor:65 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/d31e68165eca5ff6d1c520df790ba12819cf97294e1516f0a157857d4b3c5853/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-660:{mountpoint:/var/lib/containers/storage/overlay/e73726bb2699cf760c64fb4d0c408a46a112b906046c4b80ec047c5a5f2d5b5e/merged major:0 minor:660 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/5b53a50ba2fb811f3f948c726d5f50f725586d40c65ba307927090fbd5394f74/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-667:{mountpoint:/var/lib/containers/storage/overlay/3eb6b77926197342e40f5b38c75e9323061b433cab72bdb35f086cd2142876b3/merged major:0 minor:667 fsType:overlay blockSize:0} overlay_0-669:{mountpoint:/var/lib/containers/storage/overlay/7b0b895f3096f7f1264d36cd922aa4e5c86feaef5d010e21b306c827dbb3fae8/merged major:0 minor:669 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/c6b9107e2ba53b1060ef4802a64a4f3df0967cf33b3f0894f0a1eb6c3ff4f02f/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/20274a38092338d8d3aab0b942af547a96952e9c74761e78a373c3dfda6887a3/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-683:{mountpoint:/var/lib/containers/storage/overlay/1fac03e17b5ca2daf49ceb71fc112f78ba4659b2a4369e24bd27a21c98cf3061/merged major:0 minor:683 fsType:overlay blockSize:0} overlay_0-709:{mountpoint:/var/lib/containers/storage/overlay/48f527a0d6af50335ef417d2f2554b8bfafd865483d1853e2a68e5a934e2f426/merged major:0 minor:709 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/633ae504b28ce5e1798cbbefe25626ae7409b734ea2b1a83ebe6c44175101104/merged major:0 minor:71 fsType:overlay blockSize:0} 
overlay_0-725:{mountpoint:/var/lib/containers/storage/overlay/34ce57ce937665291f51da07c81f0bbeb71bd270416304a371191989ed9ed74f/merged major:0 minor:725 fsType:overlay blockSize:0} overlay_0-729:{mountpoint:/var/lib/containers/storage/overlay/4876a81493d645d2ca97b5088daa1ff521f2e29fca73dba75b8738e62a1f44b7/merged major:0 minor:729 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/c64e8d00a6631f7d329e1f37831ae5514c9a78763e0e8582f978eca2752ebb52/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/253f7ae8c049cb780187f95d0ec3ffeda715698690d142f6059b9083347d9197/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/7c978abf48bb48d0a61828a008322d6d633e398f6febf35a0ebf6955aca7595b/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-746:{mountpoint:/var/lib/containers/storage/overlay/bd8499db433e2a15d37f60b65a95e8939a448be79501b5015a6f9b4e446ee15e/merged major:0 minor:746 fsType:overlay blockSize:0} overlay_0-752:{mountpoint:/var/lib/containers/storage/overlay/ffafbb756a30d2389be741400a9a8d1be63498abbd6cfe0db74d7687e1eeced4/merged major:0 minor:752 fsType:overlay blockSize:0} overlay_0-765:{mountpoint:/var/lib/containers/storage/overlay/e8d6ba8371ffd07d2e73f7046fbd43dfbf7c280e82827030f2afbbcc080724c5/merged major:0 minor:765 fsType:overlay blockSize:0} overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/586538bdd2c18bf54f1a4fa2bf7895ed14f3c47014ae40692a600391ad14198f/merged major:0 minor:769 fsType:overlay blockSize:0} overlay_0-772:{mountpoint:/var/lib/containers/storage/overlay/4b5e140bb9d5fbe37aeb67998f151d5cd74bf51326bb49663f09fec2993f941a/merged major:0 minor:772 fsType:overlay blockSize:0} overlay_0-774:{mountpoint:/var/lib/containers/storage/overlay/2352d6a1b3f891721884b95c9d7980d15e8daa86ea1dad976501c564f83a001f/merged major:0 
minor:774 fsType:overlay blockSize:0} overlay_0-779:{mountpoint:/var/lib/containers/storage/overlay/686c01fa86f743ad9e2f3eadc06e5410664139ca535609b9753ec3080df0e258/merged major:0 minor:779 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/7a9dd719d5decc5278a923387e2569e02867285e1ec299a4bcd18024d6437921/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-785:{mountpoint:/var/lib/containers/storage/overlay/fb8b647c4ba746c42ea7f722c06d5942aefe7714f5397244a9f1a0e5dcbe382b/merged major:0 minor:785 fsType:overlay blockSize:0} overlay_0-810:{mountpoint:/var/lib/containers/storage/overlay/18b481e4f0f4e11df095c3a2b584e0d530bb38f952e11a906c474f9b729153c2/merged major:0 minor:810 fsType:overlay blockSize:0} overlay_0-812:{mountpoint:/var/lib/containers/storage/overlay/cb5dee39e450a82c6ccab44f416265dd37aa1ebee1fccaa2d0018ae6e72c875b/merged major:0 minor:812 fsType:overlay blockSize:0} overlay_0-835:{mountpoint:/var/lib/containers/storage/overlay/85778aeec28366abcf50411884bffaba6a063a6426f528faa0522c7003225cda/merged major:0 minor:835 fsType:overlay blockSize:0} overlay_0-837:{mountpoint:/var/lib/containers/storage/overlay/0cb6bbe31d7693c8c64118a3572a7482931f1d62b70cf951f1fe9f5cc3f718b1/merged major:0 minor:837 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/a99ca07977f61fc12044e985000970834250083d29b38fa32ec818da50f5b39f/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/0d76d9ff8a535d746c91ec66c719996f03ab0f93f49719d42cdf7c137029f55e/merged major:0 minor:842 fsType:overlay blockSize:0} overlay_0-856:{mountpoint:/var/lib/containers/storage/overlay/458bbb738713139b6a69c921e2c4a7b9f9f999801ccfc23579472238c09ad80c/merged major:0 minor:856 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/45452cc4e54a778e01860a5cdc7d2a92b75ff8b591782538a5fe405cecfa423d/merged major:0 minor:858 
fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/078d8f0af9016b4dde8dc37aa751e726d541e5404fc47b319650164ca45cef03/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/12f9ee3f36eeb706360266f9734c8d328cba188852892a91d638b93038119784/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/1a5179a1c49224b42258a438c660d79dfa1ef4379e70538c6def858962fd5988/merged major:0 minor:868 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/79b818d03c51ffe4b1694be583793f52a7a3a9f8056ac2631b383b5635d1dfb1/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/68ad11af07fcd37893c9f2e91056fd61c11739a890c06f3e2ecdb58ee3412bc9/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/bf16ade37e9e347864c26bb0c900a11664c3248d0defb78b4bf4b3794172739c/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-887:{mountpoint:/var/lib/containers/storage/overlay/02cd9a4dd180778a98f7f073e0ddabc075c74da96af07d82ee173c24d1539219/merged major:0 minor:887 fsType:overlay blockSize:0} overlay_0-888:{mountpoint:/var/lib/containers/storage/overlay/609130c9017d9ee3e3c6c02fc0ead1878d58048e77c5454f828ebdcbfb0d9603/merged major:0 minor:888 fsType:overlay blockSize:0} overlay_0-890:{mountpoint:/var/lib/containers/storage/overlay/4e82e527dab0de194bb161639466e999bdcfed03605a26f94e96f141fac7743e/merged major:0 minor:890 fsType:overlay blockSize:0} overlay_0-896:{mountpoint:/var/lib/containers/storage/overlay/36d603048618b900b8a23a77acafb83bf6c051d5c03fea4f52cc27a0bdbfd683/merged major:0 minor:896 fsType:overlay blockSize:0} overlay_0-898:{mountpoint:/var/lib/containers/storage/overlay/ef9300f1e4dde56db07e011d6b1a26f930432e9859d13aae40580dca2f6cc665/merged major:0 minor:898 fsType:overlay 
blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/2f22a9e0ceb712368c9ae6f49c231cfd69ffe08862609a5213e19fd7e6a42adc/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-901:{mountpoint:/var/lib/containers/storage/overlay/c4aa601012dd7b44eaea8f27b28a714d30a55493b6b9c33b0c45d5c857a09331/merged major:0 minor:901 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/fbd6931e805d09df060d0f5869441cc9241d366304486cee46a09a6acc8625c9/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/b9c7df604bfd4b140355e86900e8867f1bcfb473cd5e034e6b4791e4a103ff88/merged major:0 minor:905 fsType:overlay blockSize:0} overlay_0-910:{mountpoint:/var/lib/containers/storage/overlay/bd43d058577f77a3c56083b4392a1afce99e88b82001fb282fe1fbdc5cd87eba/merged major:0 minor:910 fsType:overlay blockSize:0} overlay_0-913:{mountpoint:/var/lib/containers/storage/overlay/3ac0453ccf2b249f365f149a1eb44197084e0084c129bd50e13bfa894d9dbaf4/merged major:0 minor:913 fsType:overlay blockSize:0} overlay_0-919:{mountpoint:/var/lib/containers/storage/overlay/eadecde04008c6ecab331b5354b64ba044067a82bff5a0f42c12b4e1616778b8/merged major:0 minor:919 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/26ce079e8dbfd75ceec724d859d913ce5d17ae95b161db96106bfa3c87950211/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-929:{mountpoint:/var/lib/containers/storage/overlay/bde25d74a4d61c3da1b932f354ef137a1615188f815fe911eed0912a16e8ff42/merged major:0 minor:929 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/3c266a769afb48934934b9954b9f36a7bd07957bac18ef278cab56974aff1697/merged major:0 minor:93 fsType:overlay blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/0798a1fe6d8dd30286aeb5f074abffe9cf427c4c91feb52c3a2795de78fce83b/merged major:0 minor:932 fsType:overlay blockSize:0} 
overlay_0-934:{mountpoint:/var/lib/containers/storage/overlay/11ab91a7b044c6b2883309626bdc935a9174e27aa62aa4dffa350ba9b6565530/merged major:0 minor:934 fsType:overlay blockSize:0} overlay_0-943:{mountpoint:/var/lib/containers/storage/overlay/0b2c69bbb0f3abf527c0ab8b5084d1dfd2838f619d85262947023f302ae12daa/merged major:0 minor:943 fsType:overlay blockSize:0} overlay_0-972:{mountpoint:/var/lib/containers/storage/overlay/dc73a8ab8442c72b90b5d6aabc90fda7b02de179dc237d1d01a449c40452d1ac/merged major:0 minor:972 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/d0b1e9341036fa6bd6d2de4dc0456a0c16c1544c072620a75b93ba1a557057be/merged major:0 minor:982 fsType:overlay blockSize:0} overlay_0-987:{mountpoint:/var/lib/containers/storage/overlay/1d575607c3594d516719b17c716ca4b3d3cd554c351c8069fdae7a054aa478f2/merged major:0 minor:987 fsType:overlay blockSize:0}] Mar 09 16:46:14.014386 master-0 kubenswrapper[32968]: I0309 16:46:14.012991 32968 manager.go:217] Machine: {Timestamp:2026-03-09 16:46:14.012113801 +0000 UTC m=+0.115436361 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:f32a84ce369a40d4b790587e3ee415c9 SystemUUID:f32a84ce-369a-40d4-b790-587e3ee415c9 BootID:14726782-964f-4d13-8ec1-f1921737ccdf Filesystems:[{Device:/run/containers/storage/overlay-containers/a9b628cdb80b26fca66723feadbd65d1a0479ac8b305d4bb2d0a1150e9146e96/userdata/shm DeviceMajor:0 DeviceMinor:753 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/70eddae976602b0fd7a417da85764552e2ce702063285733d01e52d020ee14c3/userdata/shm DeviceMajor:0 
DeviceMinor:1189 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-571 DeviceMajor:0 DeviceMinor:571 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a62ba179-443d-424f-8cff-c75677e8cd5c/volumes/kubernetes.io~projected/kube-api-access-z242f DeviceMajor:0 DeviceMinor:244 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~projected/kube-api-access-98llp DeviceMajor:0 DeviceMinor:137 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7bb1ade7135b46fd5c4d6dd8420520ed7e496d3520bdd197b24cd39361e4974/userdata/shm DeviceMajor:0 DeviceMinor:385 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-430 DeviceMajor:0 DeviceMinor:430 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1066 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1205 DeviceMajor:0 DeviceMinor:1205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a6cd9347-eec9-4549-9de4-6033112634ce/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:807 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-629 DeviceMajor:0 DeviceMinor:629 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-644 
DeviceMajor:0 DeviceMinor:644 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:136 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb4bd8cef53e72d659379e281e583b2e2ff3d1ae2b420acbf269067cfbc2882a/userdata/shm DeviceMajor:0 DeviceMinor:1071 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e91a0e23-c95b-4290-9c0c-29101febfc8f/volumes/kubernetes.io~projected/kube-api-access-26xps DeviceMajor:0 DeviceMinor:1188 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-449 DeviceMajor:0 DeviceMinor:449 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-482 DeviceMajor:0 DeviceMinor:482 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/9b33cc8c866e566d8db69ec2714025c50a89f231d3fe1b8f3f84ec92a664fd47/userdata/shm DeviceMajor:0 DeviceMinor:611 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33c56041dc9d339a8096c2a35d53acb4dde7c5410f33acc082e8c4c46e221ea6/userdata/shm DeviceMajor:0 DeviceMinor:1073 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~projected/kube-api-access-kvh62 DeviceMajor:0 DeviceMinor:126 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-582 DeviceMajor:0 DeviceMinor:582 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1159 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79d594aa020700806dd9e44316eef12fd128d94f7dc4e9551c946af4ab6e32f2/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-373 DeviceMajor:0 DeviceMinor:373 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8ed14624fda42261a13dd0229ffa468f16bc90c4a3c65851f679126f89bd762/userdata/shm DeviceMajor:0 DeviceMinor:362 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:470 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0f0a39d805a27ae6402fcdfc0601eab19733f53f21a52d2a798a59ad90607729/userdata/shm DeviceMajor:0 DeviceMinor:319 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de208e3a2ab24fcacb3a925a722bd645634c3c69d34c97d39fd21af088ce4d70/userdata/shm DeviceMajor:0 DeviceMinor:102 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-551 DeviceMajor:0 DeviceMinor:551 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a320d845-3a5d-4027-a765-f0b2dc07f9de/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:805 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1075 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/1ba020e0-1728-4e56-9618-d0ec3d9126eb/volumes/kubernetes.io~projected/kube-api-access-tnw68 DeviceMajor:0 DeviceMinor:112 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-774 DeviceMajor:0 DeviceMinor:774 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1195 DeviceMajor:0 DeviceMinor:1195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/baf704e3-daf2-4934-a04e-d31df8df0c4a/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:814 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-593 DeviceMajor:0 DeviceMinor:593 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/696fcca2-df1a-491d-956d-1cfda1ee5e70/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 
DeviceMinor:1178 Capacity:200003584 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-810 DeviceMajor:0 DeviceMinor:810 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1029 DeviceMajor:0 DeviceMinor:1029 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~projected/kube-api-access-rrt7m DeviceMajor:0 DeviceMinor:99 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49d5e328c8ae7739c3f9bf91ece9d3a14759dce6582c64fdaa51d38259fb6d04/userdata/shm DeviceMajor:0 DeviceMinor:1005 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6788f7a40b77011605c70f1a8a04a398749caf9b6fc2edbcd8e5648805b8f8e6/userdata/shm DeviceMajor:0 DeviceMinor:516 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-311 
DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-619 DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be856881-2ceb-4803-8330-4a27ad8b1937/volumes/kubernetes.io~projected/kube-api-access-v98bk DeviceMajor:0 DeviceMinor:816 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:465 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~projected/kube-api-access-hkrlr DeviceMajor:0 DeviceMinor:241 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d/volumes/kubernetes.io~projected/kube-api-access-wn8hj DeviceMajor:0 DeviceMinor:804 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1163 DeviceMajor:0 DeviceMinor:1163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/357570a4-f69b-4970-9b6f-fe06fc4c2f90/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:734 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5534d85f0a9fe740eb26ccac2e47ce52d44e3f557fa5be108af8630168b4e7ab/userdata/shm DeviceMajor:0 DeviceMinor:759 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6bef13556b054eeec06112dd3efb63b9b2d0c3aa5b54369f3f112afc33fa6fa0/userdata/shm DeviceMajor:0 DeviceMinor:737 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1064 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5587e967-124e-4f2a-b7fb-42cb16bfc337/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:809 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:510 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-896 DeviceMajor:0 DeviceMinor:896 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:247 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e4f6f154c0ec1b09b3b7820eed793121d9068c5d693186b39e540c2972df7faf/userdata/shm DeviceMajor:0 DeviceMinor:314 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-667 DeviceMajor:0 DeviceMinor:667 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1002 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/608688d561d24b6906960660d5e2edc9bd06afaeaeeaca5e96ca0b4cdea64b30/userdata/shm DeviceMajor:0 DeviceMinor:426 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-617 DeviceMajor:0 DeviceMinor:617 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1080 DeviceMajor:0 DeviceMinor:1080 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~projected/kube-api-access-psgk6 DeviceMajor:0 DeviceMinor:228 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~projected/kube-api-access-4zxck DeviceMajor:0 DeviceMinor:249 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a/userdata/shm DeviceMajor:0 DeviceMinor:96 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:472 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2e765395-7c6b-4cba-9a5a-37ba888722bb/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:605 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe006380d1e36eb88db9d3ab71f40a33e55bbcd5d71cd8ca531aa0535a202808/userdata/shm DeviceMajor:0 DeviceMinor:692 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1090 DeviceMajor:0 DeviceMinor:1090 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1167 DeviceMajor:0 DeviceMinor:1167 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~projected/kube-api-access-pr46z DeviceMajor:0 DeviceMinor:234 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~projected/kube-api-access-98j7c DeviceMajor:0 DeviceMinor:230 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/53d006bb096c33feedc1376ff3068c0efd56153db916f72ddbcc8de717b1c134/userdata/shm DeviceMajor:0 DeviceMinor:313 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/57aaf330726fe627a8a61909fad0b332f97b99d8101a20fb9a743ae449fbfca5/userdata/shm DeviceMajor:0 DeviceMinor:1088 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ac1759b18ef6f3a5d8d448ff7a72c6622b588c67072b3c619de1db8258e2cc7/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:414 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1042 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/073aa9cb06334299c5f2786863d371a99d5ceae50e199996f6bf33c71ae8308e/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79ef2ec1abfe2471da50c84133ba573002a31131516bb5efe8dcb8952c2f3409/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-636 DeviceMajor:0 DeviceMinor:636 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6b4992e-50f3-473c-aa83-ed35569ba307/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:740 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-746 DeviceMajor:0 DeviceMinor:746 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d6912539-9b06-4e2c-b6a8-155df31147f2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:237 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-943 DeviceMajor:0 
DeviceMinor:943 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af4aa8d4-09e1-4589-b7bf-885617a11337/volumes/kubernetes.io~projected/kube-api-access-whqdm DeviceMajor:0 DeviceMinor:445 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91158bad31d126f335930945d685253a8862c41cc0ef9e00a780fb2229ca874e/userdata/shm DeviceMajor:0 DeviceMinor:821 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-752 DeviceMajor:0 DeviceMinor:752 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:407 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-887 DeviceMajor:0 DeviceMinor:887 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7a8a0d67ea36ee3a994c26d0ebe85810170dde1bf5672599a73bf8bf6d568a5b/userdata/shm DeviceMajor:0 DeviceMinor:612 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39187f3f3774db7f1cd32a1eade411cde2d6032989cb572717b605403bb05a46/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-580 DeviceMajor:0 DeviceMinor:580 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1052 DeviceMajor:0 DeviceMinor:1052 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:542 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/volumes/kubernetes.io~projected/kube-api-access-4gkxg DeviceMajor:0 DeviceMinor:794 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1127 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-616 DeviceMajor:0 DeviceMinor:616 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1078 DeviceMajor:0 DeviceMinor:1078 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca/userdata/shm DeviceMajor:0 DeviceMinor:523 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:494 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1018 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1118 DeviceMajor:0 DeviceMinor:1118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:462 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-929 DeviceMajor:0 DeviceMinor:929 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1165 DeviceMajor:0 DeviceMinor:1165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6d47955b-b85c-4137-9dea-ff0c20d5ab77/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:464 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/a6cd9347-eec9-4549-9de4-6033112634ce/volumes/kubernetes.io~projected/kube-api-access-lcvbf DeviceMajor:0 DeviceMinor:796 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:220 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/c72e89f0-37ad-4515-89ba-ba1f52ba61f0/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:424 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-626 DeviceMajor:0 DeviceMinor:626 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3745c679-2ea9-4382-9270-4d3fbbaaf296/volumes/kubernetes.io~projected/kube-api-access-jgj24 DeviceMajor:0 DeviceMinor:724 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~projected/kube-api-access-nl7dv DeviceMajor:0 DeviceMinor:236 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5b2e2b8431e578f6680e8136b12cf396552c4aea8bb6288c6f61287f345382bf/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-709 DeviceMajor:0 DeviceMinor:709 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-772 DeviceMajor:0 DeviceMinor:772 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-837 DeviceMajor:0 DeviceMinor:837 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b9030c9-7f5f-4e54-ae93-140469e3558b/volumes/kubernetes.io~projected/kube-api-access-782hr DeviceMajor:0 DeviceMinor:246 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90ca2fa02f79332177c148a9e6c26855ea8345957c6f930d8d2630124445c84d/userdata/shm DeviceMajor:0 DeviceMinor:506 
Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd39f0db4c8cb49b906ba36723dbeb15b7ced8a9a0505c21a799794cabf48a9c/userdata/shm DeviceMajor:0 DeviceMinor:826 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc76c96b8711e8cdb111dc9420b888c11e659ac13def57c18f19053474f6d217/userdata/shm DeviceMajor:0 DeviceMinor:824 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/61ca985b701119ca3bc5cf79646c9b786ed15e0cf89939a4c8d105994f958559/userdata/shm DeviceMajor:0 DeviceMinor:1048 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/f3033e86-fee0-45dc-ba74-d5448a777400/volumes/kubernetes.io~projected/kube-api-access-grmch DeviceMajor:0 DeviceMinor:367 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~projected/kube-api-access-rrms4 DeviceMajor:0 DeviceMinor:1024 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/746ef340944994ce9a968afe481306c3a960527b0c894fdef2a59f09558cc35d/userdata/shm DeviceMajor:0 DeviceMinor:511 Capacity:67108864 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-145 DeviceMajor:0 DeviceMinor:145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:477 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5587e967-124e-4f2a-b7fb-42cb16bfc337/volumes/kubernetes.io~projected/kube-api-access-4dzfq DeviceMajor:0 DeviceMinor:711 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:501 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1031 DeviceMajor:0 DeviceMinor:1031 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02/userdata/shm DeviceMajor:0 DeviceMinor:525 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/457f42a7-f14c-4d61-a87a-bc1ed422feed/volumes/kubernetes.io~projected/kube-api-access-497s5 DeviceMajor:0 DeviceMinor:235 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/4fe13e40d8f70d12ef39c31f6912e2f6997171e0a974be29b5b2e5483842c703/userdata/shm DeviceMajor:0 DeviceMinor:802 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-890 DeviceMajor:0 DeviceMinor:890 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~projected/kube-api-access-p9dfn DeviceMajor:0 DeviceMinor:123 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:456 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1010 DeviceMajor:0 DeviceMinor:1010 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e97466a-7c33-4efb-a961-14024d913a21/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-934 DeviceMajor:0 DeviceMinor:934 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1065 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-729 DeviceMajor:0 DeviceMinor:729 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes/kubernetes.io~projected/kube-api-access-hw4zf DeviceMajor:0 DeviceMinor:517 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~projected/kube-api-access-sst4g DeviceMajor:0 DeviceMinor:240 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:overlay_0-458 DeviceMajor:0 DeviceMinor:458 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/631f2bdf-2ed4-4315-98c3-c5a538d0aec3/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:731 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~projected/kube-api-access-6n2qw DeviceMajor:0 DeviceMinor:801 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8501bf68ce95bbeaffdb2360973b36355c548ca399a4580b05e931dc935338ae/userdata/shm DeviceMajor:0 DeviceMinor:1161 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:541 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/34c0b60e-da69-452d-858d-0af77f18946d/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:732 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-562 DeviceMajor:0 DeviceMinor:562 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18f0164f-0875-4668-b155-df69e05e8ae0/volumes/kubernetes.io~projected/kube-api-access-pq2bk DeviceMajor:0 DeviceMinor:646 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config 
DeviceMajor:0 DeviceMinor:1157 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/426c862cbd44263b6137c6ff9a9827045365f6b3f02b29e72da05e433127947c/userdata/shm DeviceMajor:0 DeviceMinor:820 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1155 DeviceMajor:0 DeviceMinor:1155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b43b8b247bcf7dd91e3dade29e3c0373e4989b5f279bccec521a6e0e7ca4f4e0/userdata/shm DeviceMajor:0 DeviceMinor:69 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/5565c060-5952-4e85-8873-18bb80663924/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f75363920ad64e4c78b1f0d1f173c7de25c0e0f55f2579de303c797301bd76d5/userdata/shm DeviceMajor:0 DeviceMinor:502 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:479 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-559 DeviceMajor:0 DeviceMinor:559 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-785 DeviceMajor:0 DeviceMinor:785 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~projected/kube-api-access-whqvw DeviceMajor:0 DeviceMinor:125 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-898 DeviceMajor:0 DeviceMinor:898 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b963c0b550fd8020bc9825f99df227668deb1ae10545aef13e051c423fc551b/userdata/shm DeviceMajor:0 DeviceMinor:429 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/87b176bfed491d23a5eac46cd3a9a97ac570ad47784a45949a2c9acf53d5102d/userdata/shm DeviceMajor:0 DeviceMinor:727 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/c72e89f0-37ad-4515-89ba-ba1f52ba61f0/volumes/kubernetes.io~projected/kube-api-access-h8jvl DeviceMajor:0 DeviceMinor:415 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a200a12ef51900dc0279235bd7709ecea56165d840345196baa3b66d5c325ea/userdata/shm DeviceMajor:0 DeviceMinor:722 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:790 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:799 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-856 DeviceMajor:0 DeviceMinor:856 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9a4035c483ccb665ee714811dce3e885485fca3dbbbfca3a333a197a59c1abfa/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-533 DeviceMajor:0 DeviceMinor:533 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-564 DeviceMajor:0 DeviceMinor:564 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1173 DeviceMajor:0 DeviceMinor:1173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2d3c20a-f92e-433b-9fbc-b667b7bcf175/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1103 DeviceMajor:0 DeviceMinor:1103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/baf704e3-daf2-4934-a04e-d31df8df0c4a/volumes/kubernetes.io~projected/kube-api-access-nhglf DeviceMajor:0 DeviceMinor:815 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-117 DeviceMajor:0 DeviceMinor:117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 
DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8796f37c-d1ec-469d-90df-e007bf620e8c/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:800 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1001 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:94 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-779 DeviceMajor:0 DeviceMinor:779 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-423 DeviceMajor:0 DeviceMinor:423 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8663cef33748a7bf8ddabf2e8fe22249ef66e9b5f0f42e008eddcf3a9a74a9f6/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/volumes/kubernetes.io~projected/kube-api-access-p8rjs DeviceMajor:0 DeviceMinor:428 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~projected/kube-api-access-kc2t2 DeviceMajor:0 DeviceMinor:480 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-972 DeviceMajor:0 DeviceMinor:972 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/461d651c2983a3280f7f697edd78a39f969f73ae2b43066899a6cd798fe74203/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/af4aa8d4-09e1-4589-b7bf-885617a11337/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:440 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/aec186fc-aead-47fb-a7e1-8c9325897c47/volumes/kubernetes.io~projected/kube-api-access-vj9cq DeviceMajor:0 DeviceMinor:715 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/1da6f189-535a-4bf1-bbdb-758327651ae2/volumes/kubernetes.io~projected/kube-api-access-xgl27 DeviceMajor:0 DeviceMinor:817 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c400ace13e0290ea978d90a75cda129235df657b46ef5808d10268996d05129a/userdata/shm DeviceMajor:0 DeviceMinor:1007 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1023 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-65 DeviceMajor:0 DeviceMinor:65 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/788337cf1e09325f2236882f1ea9cfff779af178f88c34c2eda040e13b5fdf04/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:469 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~projected/kube-api-access-xkjv9 DeviceMajor:0 DeviceMinor:213 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/68d95e05ad27d2105d13bcbb6ce1233f9b530be643a1070361b913794693ff4f/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1193 DeviceMajor:0 DeviceMinor:1193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dc732d23-37bc-41c2-9f9b-333ba517c1f8/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:500 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1012 DeviceMajor:0 DeviceMinor:1012 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34c0b60e-da69-452d-858d-0af77f18946d/volumes/kubernetes.io~projected/kube-api-access-vmdb8 DeviceMajor:0 DeviceMinor:735 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~projected/kube-api-access-jrhct DeviceMajor:0 DeviceMinor:1068 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d2a3afb8eb1e0a8c25b36f8e7877fb572cd427c87f5ea499b36180c2a18273c/userdata/shm DeviceMajor:0 DeviceMinor:475 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-495 DeviceMajor:0 DeviceMinor:495 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f965b971-7e9a-4513-8450-b2b527609bd6/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:600 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-901 DeviceMajor:0 DeviceMinor:901 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-1191 DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e4895f22-8fcd-4ace-96d8-bc2e18a67891/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:463 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-508 DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~projected/kube-api-access-dxlnq DeviceMajor:0 DeviceMinor:1003 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1099 DeviceMajor:0 DeviceMinor:1099 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca6c68bebab4667be94ed8d4950c8443a1dc101549e30dea2fc49d8db92f1da8/userdata/shm DeviceMajor:0 DeviceMinor:1153 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf28b7d0809ac175ca8dafdc77ee725bc1d96f36498a2808890144589ffa9764/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c76178f6-3f0b-4b7d-ad23-724b83e35120/volumes/kubernetes.io~projected/kube-api-access-2mr7t DeviceMajor:0 DeviceMinor:550 Capacity:32475512832 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2196a2b6120faa0a67dddbba1ab37ef9d1b821632322a4076c71fe4a5abd57ef/userdata/shm DeviceMajor:0 DeviceMinor:823 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/df2ec8b2-02d7-40c4-ac20-32615d689697/volumes/kubernetes.io~projected/kube-api-access-rfj7p DeviceMajor:0 DeviceMinor:98 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/kube-api-access-5trxh DeviceMajor:0 DeviceMinor:227 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-640 DeviceMajor:0 DeviceMinor:640 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34a4491c-12cc-4531-ad3e-246e93ed7842/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8be2517a-6f28-4289-a108-6e3345a1e246/volumes/kubernetes.io~projected/kube-api-access-hh9fx DeviceMajor:0 DeviceMinor:744 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~projected/kube-api-access-h8p7w DeviceMajor:0 DeviceMinor:1129 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1185 DeviceMajor:0 DeviceMinor:1185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-634 DeviceMajor:0 DeviceMinor:634 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8c93fb5d-373d-4473-99dd-50e4398bafbf/volumes/kubernetes.io~projected/kube-api-access-nl5kt DeviceMajor:0 DeviceMinor:461 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:481 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae9bffea87b1c17f19561e0c0bfd5953f59d9425ed2be72004b89a80da980210/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8972b380-8f87-4b73-8f95-440d34d03884/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:739 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48ea3b1c1a43df7f7909a26935d767da157bf2e1b5a1c65e482d9227e70712b8/userdata/shm DeviceMajor:0 DeviceMinor:1182 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-913 DeviceMajor:0 DeviceMinor:913 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1101 DeviceMajor:0 DeviceMinor:1101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-660 DeviceMajor:0 DeviceMinor:660 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/16090dface4ebfac4ce59503c1b97e63c47315ed98b676af9cb614a7646af5db/userdata/shm DeviceMajor:0 DeviceMinor:757 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/3ec3050d-8e6f-466a-995a-f78270408a85/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:818 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/631f2bdf-2ed4-4315-98c3-c5a538d0aec3/volumes/kubernetes.io~projected/kube-api-access-shpfl DeviceMajor:0 DeviceMinor:736 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-765 DeviceMajor:0 DeviceMinor:765 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35632f3eff4c27d52976478ab10425da5e046ac8fcff6eb2dc1b92a71e399460/userdata/shm DeviceMajor:0 DeviceMinor:750 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:999 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-683 DeviceMajor:0 DeviceMinor:683 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-643 DeviceMajor:0 DeviceMinor:643 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 
DeviceMinor:1158 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48249480743afb1557ba264af8f59f88c34e220ee454b5474f5f834aad81feec/userdata/shm DeviceMajor:0 DeviceMinor:138 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~projected/kube-api-access-bdmsj DeviceMajor:0 DeviceMinor:229 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-725 DeviceMajor:0 DeviceMinor:725 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-57 DeviceMajor:0 DeviceMinor:57 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-569 DeviceMajor:0 DeviceMinor:569 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1224 DeviceMajor:0 DeviceMinor:1224 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/7937ccab-a6fb-4401-a4fd-7a2a91a7193f/volumes/kubernetes.io~projected/kube-api-access-cm4ff DeviceMajor:0 DeviceMinor:308 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ebbec674-ac49-422a-9548-5c29b15ad44d/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1067 
Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/709aad35-08ca-4ff5-abe5-e1558c8dc83f/volumes/kubernetes.io~projected/kube-api-access-579rp DeviceMajor:0 DeviceMinor:268 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b0a3a4ee0305c897e72b7253be6cebaee1b1c6c54eed95437052e11964c648c2/userdata/shm DeviceMajor:0 DeviceMinor:756 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-987 DeviceMajor:0 DeviceMinor:987 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c4dfdcc-e182-4831-98e4-1eedb069bcf6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:232 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/357570a4-f69b-4970-9b6f-fe06fc4c2f90/volumes/kubernetes.io~projected/kube-api-access-495rn DeviceMajor:0 DeviceMinor:738 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1060 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-604 DeviceMajor:0 DeviceMinor:604 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:127 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} 
{Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e93b4b4f200d56ef8323128adb8803f45fd9510b5dfe152e914167559d4662b8/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/66004239a17fd7bc97d7f3971acf1ba033b37e34b26d1d3808dcbd70a06e0a98/userdata/shm DeviceMajor:0 DeviceMinor:797 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/553046f43046d3fa77eb28600092cf144252c67ea18629a73915a18e4207a5c0/userdata/shm DeviceMajor:0 DeviceMinor:214 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1148 DeviceMajor:0 DeviceMinor:1148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b97657aafcf4ed4b7c8c8ead4ffccb037edbe9dd2764c1eb20b8b0101936b61e/userdata/shm DeviceMajor:0 DeviceMinor:487 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e68e729dc16b7303d9fa69af7f0d39f2249d9f66e6c9ceb43ec2254fd7af17fe/userdata/shm DeviceMajor:0 DeviceMinor:1025 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:473 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1143 Capacity:200003584 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1044 DeviceMajor:0 DeviceMinor:1044 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-431 DeviceMajor:0 DeviceMinor:431 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-910 DeviceMajor:0 DeviceMinor:910 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2e38be5-1d33-4171-b27f-78a335f1590b/volumes/kubernetes.io~projected/kube-api-access-ctsqs DeviceMajor:0 DeviceMinor:243 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/8be2517a-6f28-4289-a108-6e3345a1e246/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:741 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ea34ff7e-27fa-4c26-acc0-ec551985eb76/volumes/kubernetes.io~projected/kube-api-access-fl697 DeviceMajor:0 DeviceMinor:749 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92bd7735-8e3c-43bb-b543-03e6e6c5142a/volumes/kubernetes.io~projected/kube-api-access-dv8rh DeviceMajor:0 DeviceMinor:1069 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e91a0e23-c95b-4290-9c0c-29101febfc8f/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1184 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9851e44d22a4912195681afea0e67c8f9b72db3658de58af22ee3dada2512884/userdata/shm DeviceMajor:0 DeviceMinor:405 
Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/550675cc793416636547bf85e3f7c0ac6b1a7b142b9ca52ae64847f31b9d610e/userdata/shm DeviceMajor:0 DeviceMinor:761 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ec3050d-8e6f-466a-995a-f78270408a85/volumes/kubernetes.io~projected/kube-api-access-qsbkx DeviceMajor:0 DeviceMinor:846 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ea34ff7e-27fa-4c26-acc0-ec551985eb76/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:748 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9fc9e7d-652c-4063-9cdb-358f58cae29a/volumes/kubernetes.io~projected/kube-api-access-xnstc DeviceMajor:0 DeviceMinor:1070 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~projected/kube-api-access-rvkfn DeviceMajor:0 DeviceMinor:1160 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29f3efce623abd11180f220d3e9cf221f9f6cf57527de2211126a65b38f4186b/userdata/shm DeviceMajor:0 DeviceMinor:763 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e346cb5b-411d-4014-a8d0-590d8deee8ac/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1000 Capacity:32475512832 Type:vfs 
Inodes:4108168 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6cf9eae5-38bc-48fa-8339-d0751bb18e8c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/360673ea108cd414a9191ef702491df26b4dd5cfe949286f6320af0b621bc778/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-919 DeviceMajor:0 DeviceMinor:919 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18f0164f-0875-4668-b155-df69e05e8ae0/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:647 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d15da434-241d-4a93-9ce3-f943d43bf2ce/volumes/kubernetes.io~projected/kube-api-access-vqcqb DeviceMajor:0 DeviceMinor:239 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/be86c85d-59b1-4279-8253-a998ca16cd4d/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:474 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1123 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/72739f4d-da25-493b-91ef-d2b64e9297dd/volumes/kubernetes.io~projected/kube-api-access-4p2nd DeviceMajor:0 DeviceMinor:233 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eb70b637ebcdf20545438ca3a9998bdd103e60d200280f4b769a5fd812b5a907/userdata/shm DeviceMajor:0 DeviceMinor:819 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~projected/kube-api-access-cvfgw DeviceMajor:0 DeviceMinor:1047 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/79a8ea87-c29a-4248-927f-6f1acfc494d7/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1152 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b676c70029ef5855abfa14f2003a0111186001d162750fabf1b8fa3de8da206e/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/004d1e93-2345-4e62-902c-33f9dbb0f397/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:471 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-484 DeviceMajor:0 DeviceMinor:484 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3d03237905955f360835bd7e4b475cd410b822e7afcfaef604a65fcffa582546/userdata/shm DeviceMajor:0 DeviceMinor:825 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/9482fb93-c223-45ee-bde8-7667303270b6/volumes/kubernetes.io~projected/kube-api-access-qjf4p DeviceMajor:0 DeviceMinor:1004 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 
DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8972b380-8f87-4b73-8f95-440d34d03884/volumes/kubernetes.io~projected/kube-api-access-8hwnd DeviceMajor:0 DeviceMinor:742 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/e5c4ccb0-f795-44bd-9bb4-baf84564c239/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1046 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1128 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1033 DeviceMajor:0 DeviceMinor:1033 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d4a0f99378ff486b79217185409909dec619d9e2dc5b5592edac2f0fa8b54029/userdata/shm DeviceMajor:0 DeviceMinor:610 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-835 DeviceMajor:0 DeviceMinor:835 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-648 DeviceMajor:0 DeviceMinor:648 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a320d845-3a5d-4027-a765-f0b2dc07f9de/volumes/kubernetes.io~projected/kube-api-access-868cs DeviceMajor:0 DeviceMinor:795 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e45cfdc1870c4b26d14186540965c4e800d97239af6f9721bc9508ed1ef9bb4/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes/kubernetes.io~projected/kube-api-access-rl5cz DeviceMajor:0 DeviceMinor:518 
Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c/userdata/shm DeviceMajor:0 DeviceMinor:80 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84104ab7e1b72f886c929b832bd4c63b55c1be85a47b0371043d9ca15fb4d4ab/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/57036838-9f42-4ea1-a5c9-77f820cc22c9/volumes/kubernetes.io~projected/kube-api-access-czkqg DeviceMajor:0 DeviceMinor:400 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef122f26-bfae-44d2-a70a-8507b3b47332/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:467 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a612208-f777-486f-9dde-048b2d898c7f/volumes/kubernetes.io~projected/kube-api-access-j244n DeviceMajor:0 DeviceMinor:248 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-669 DeviceMajor:0 DeviceMinor:669 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6b4992e-50f3-473c-aa83-ed35569ba307/volumes/kubernetes.io~projected/kube-api-access-bhzzg DeviceMajor:0 DeviceMinor:743 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:478 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0abf3880d15b208436550d7a101ca3242c6cc95826cf42d21ea5b482ae9b8344/userdata/shm DeviceMajor:0 DeviceMinor:522 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9dc2251ac339285f7e616265d59b743eecae28fcec97875a6787ff662520db27/userdata/shm DeviceMajor:0 DeviceMinor:994 Capacity:67108864 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-1076 DeviceMajor:0 DeviceMinor:1076 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f606b775-bf22-4d64-abb4-8e0e24ddb5cd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:245 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:overlay_0-888 DeviceMajor:0 DeviceMinor:888 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-812 DeviceMajor:0 DeviceMinor:812 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a/volumes/kubernetes.io~projected/kube-api-access-fv95c DeviceMajor:0 DeviceMinor:238 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true} {Device:/var/lib/kubelet/pods/fa7f88a3-9845-49a3-a108-d524df592961/volumes/kubernetes.io~projected/kube-api-access-55zwh DeviceMajor:0 DeviceMinor:252 Capacity:32475512832 Type:vfs Inodes:4108168 HasInodes:true}] 
DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0abf3880d15b208 MacAddress:9e:f1:59:bf:c3:ab Speed:10000 Mtu:8900} {Name:0f0a39d805a27ae MacAddress:82:17:27:0f:dc:f2 Speed:10000 Mtu:8900} {Name:1a200a12ef51900 MacAddress:86:8c:55:bc:f9:07 Speed:10000 Mtu:8900} {Name:1d2a3afb8eb1e0a MacAddress:0e:7c:8e:44:2b:10 Speed:10000 Mtu:8900} {Name:29f3efce623abd1 MacAddress:82:07:d8:81:86:de Speed:10000 Mtu:8900} {Name:33c56041dc9d339 MacAddress:6e:a6:8a:34:63:91 Speed:10000 Mtu:8900} {Name:35632f3eff4c27d MacAddress:b2:2d:61:c3:ab:ca Speed:10000 Mtu:8900} {Name:360673ea108cd41 MacAddress:72:0f:63:e1:ff:b2 Speed:10000 Mtu:8900} {Name:39187f3f3774db7 MacAddress:1a:18:3d:3a:34:3c Speed:10000 Mtu:8900} {Name:3d03237905955f3 MacAddress:ba:86:f6:7d:b9:76 Speed:10000 Mtu:8900} {Name:48ea3b1c1a43df7 MacAddress:1e:9b:56:83:46:28 Speed:10000 Mtu:8900} {Name:49d5e328c8ae773 MacAddress:06:e5:c4:13:e0:a3 Speed:10000 Mtu:8900} {Name:4fe13e40d8f70d1 MacAddress:fe:c4:3c:c2:2e:a6 Speed:10000 Mtu:8900} {Name:53d006bb096c33f MacAddress:46:df:38:8c:da:57 Speed:10000 Mtu:8900} {Name:54c99acd4595efc MacAddress:9e:b2:9d:95:4d:b3 Speed:10000 Mtu:8900} {Name:550675cc7934166 MacAddress:62:61:15:b8:7b:f5 Speed:10000 Mtu:8900} {Name:553046f43046d3f MacAddress:6a:ca:b2:d9:93:d7 Speed:10000 Mtu:8900} {Name:5534d85f0a9fe74 MacAddress:4e:b1:2b:b6:49:50 Speed:10000 Mtu:8900} {Name:5b2e2b8431e578f MacAddress:a2:b8:2d:12:99:c3 Speed:10000 Mtu:8900} {Name:608688d561d24b6 MacAddress:e2:37:98:7e:63:fc Speed:10000 Mtu:8900} {Name:61ca985b701119c MacAddress:76:c0:05:6b:29:8a Speed:10000 Mtu:8900} {Name:66004239a17fd7b MacAddress:12:3b:71:70:ff:b8 Speed:10000 Mtu:8900} 
{Name:6788f7a40b77011 MacAddress:0e:fb:19:82:c8:1e Speed:10000 Mtu:8900} {Name:68d95e05ad27d21 MacAddress:aa:1a:22:65:2c:da Speed:10000 Mtu:8900} {Name:6b963c0b550fd80 MacAddress:8a:32:5b:8f:88:7b Speed:10000 Mtu:8900} {Name:6dbe08db551f1aa MacAddress:72:16:b7:23:d2:fd Speed:10000 Mtu:8900} {Name:70eddae976602b0 MacAddress:8a:8a:af:3a:71:43 Speed:10000 Mtu:8900} {Name:746ef340944994c MacAddress:f6:29:3a:bb:6e:e3 Speed:10000 Mtu:8900} {Name:788337cf1e09325 MacAddress:9a:4d:24:72:90:1d Speed:10000 Mtu:8900} {Name:79d594aa0207008 MacAddress:96:b7:0c:7c:a7:18 Speed:10000 Mtu:8900} {Name:7a8a0d67ea36ee3 MacAddress:b6:3a:96:be:d5:58 Speed:10000 Mtu:8900} {Name:7cbb60752ad7307 MacAddress:fe:77:49:e2:40:5b Speed:10000 Mtu:8900} {Name:84104ab7e1b72f8 MacAddress:72:aa:d6:f9:08:d8 Speed:10000 Mtu:8900} {Name:8501bf68ce95bbe MacAddress:0e:73:6b:b6:00:d7 Speed:10000 Mtu:8900} {Name:8663cef33748a7b MacAddress:a6:75:06:bc:6d:e9 Speed:10000 Mtu:8900} {Name:87b176bfed491d2 MacAddress:e2:85:b2:b2:81:ca Speed:10000 Mtu:8900} {Name:90ca2fa02f79332 MacAddress:42:f9:25:97:e3:08 Speed:10000 Mtu:8900} {Name:91158bad31d126f MacAddress:4e:ae:b5:29:60:5f Speed:10000 Mtu:8900} {Name:9851e44d22a4912 MacAddress:46:f3:88:06:92:45 Speed:10000 Mtu:8900} {Name:9a4035c483ccb66 MacAddress:5a:5b:ed:e9:68:5c Speed:10000 Mtu:8900} {Name:9b33cc8c866e566 MacAddress:ca:42:0d:56:af:49 Speed:10000 Mtu:8900} {Name:a8ed14624fda422 MacAddress:52:62:18:33:07:6e Speed:10000 Mtu:8900} {Name:a9b628cdb80b26f MacAddress:5e:dc:62:36:55:b3 Speed:10000 Mtu:8900} {Name:ae9bffea87b1c17 MacAddress:36:9c:e0:5d:07:4e Speed:10000 Mtu:8900} {Name:b0a3a4ee0305c89 MacAddress:96:59:cd:ac:57:a0 Speed:10000 Mtu:8900} {Name:b676c70029ef585 MacAddress:36:3b:3d:b3:a0:87 Speed:10000 Mtu:8900} {Name:b97657aafcf4ed4 MacAddress:32:ed:23:8d:3a:a8 Speed:10000 Mtu:8900} {Name:bc76c96b8711e8c MacAddress:36:34:c6:ab:ca:35 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:1a:b9:9f:27:c4:b9 
Speed:0 Mtu:8900} {Name:c400ace13e0290e MacAddress:de:08:a8:dd:f4:fb Speed:10000 Mtu:8900} {Name:ca6c68bebab4667 MacAddress:c2:88:76:43:79:6a Speed:10000 Mtu:8900} {Name:cf28b7d0809ac17 MacAddress:6e:17:f6:66:77:89 Speed:10000 Mtu:8900} {Name:d4a0f99378ff486 MacAddress:9e:02:83:79:e9:0b Speed:10000 Mtu:8900} {Name:d7bb1ade7135b46 MacAddress:fa:3e:a2:82:1c:93 Speed:10000 Mtu:8900} {Name:e4f6f154c0ec1b0 MacAddress:b6:94:ac:28:5b:bb Speed:10000 Mtu:8900} {Name:e93b4b4f200d56e MacAddress:86:e5:76:80:6e:73 Speed:10000 Mtu:8900} {Name:eb70b637ebcdf20 MacAddress:86:2d:ac:64:b0:b4 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:6a:59:6a Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:5c:d5:0d Speed:-1 Mtu:9000} {Name:fb4bd8cef53e72d MacAddress:de:60:bb:82:1a:7c Speed:10000 Mtu:8900} {Name:fd39f0db4c8cb49 MacAddress:76:d7:88:4b:32:f3 Speed:10000 Mtu:8900} {Name:fe006380d1e36eb MacAddress:12:b9:33:79:8b:1f Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:32:fa:9a:e0:19:26 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data 
Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 
Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.014352 32968 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.014433 32968 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.014707 32968 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.014866 32968 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.014899 32968 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015134 32968 topology_manager.go:138] "Creating topology manager with none policy" Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015144 32968 container_manager_linux.go:303] "Creating device plugin manager" Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015156 32968 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015180 32968 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015218 32968 state_mem.go:36] "Initialized new in-memory state store"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015310 32968 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015387 32968 kubelet.go:418] "Attempting to sync node with API server"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015402 32968 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015435 32968 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015452 32968 kubelet.go:324] "Adding apiserver pod source"
Mar 09 16:46:14.016008 master-0 kubenswrapper[32968]: I0309 16:46:14.015488 32968 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.016705 32968 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.016829 32968 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017068 32968 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017171 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017188 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017197 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017203 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017209 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017216 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017222 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017228 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017237 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017243 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017253 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017265 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017302 32968 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.017759 32968 server.go:1280] "Started kubelet"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.018466 32968 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.018522 32968 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.018779 32968 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.020032 32968 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 09 16:46:14.021091 master-0 kubenswrapper[32968]: I0309 16:46:14.020243 32968 server.go:449] "Adding debug handlers to kubelet server"
Mar 09 16:46:14.018624 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 09 16:46:14.033329 master-0 kubenswrapper[32968]: I0309 16:46:14.029726 32968 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 09 16:46:14.033329 master-0 kubenswrapper[32968]: I0309 16:46:14.030598 32968 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 09 16:46:14.045997 master-0 kubenswrapper[32968]: I0309 16:46:14.045892 32968 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 09 16:46:14.045997 master-0 kubenswrapper[32968]: I0309 16:46:14.045968 32968 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 09 16:46:14.046303 master-0 kubenswrapper[32968]: E0309 16:46:14.046213 32968 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 09 16:46:14.046360 master-0 kubenswrapper[32968]: I0309 16:46:14.046300 32968 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-10 16:15:49 +0000 UTC, rotation deadline is 2026-03-10 12:46:45.342494559 +0000 UTC
Mar 09 16:46:14.046360 master-0 kubenswrapper[32968]: I0309 16:46:14.046338 32968 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h0m31.296159079s for next certificate rotation
Mar 09 16:46:14.046500 master-0 kubenswrapper[32968]: I0309 16:46:14.046466 32968 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 09 16:46:14.046500 master-0 kubenswrapper[32968]: I0309 16:46:14.046485 32968 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.046619 32968 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.047524 32968 factory.go:55] Registering systemd factory
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.047617 32968 factory.go:221] Registration of the systemd container factory successfully
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.048255 32968 factory.go:153] Registering CRI-O factory
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.048292 32968 factory.go:221] Registration of the crio container factory successfully
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.048505 32968 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.048544 32968 factory.go:103] Registering Raw factory
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.048564 32968 manager.go:1196] Started watching for new ooms in manager
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.048790 32968 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 09 16:46:14.049985 master-0 kubenswrapper[32968]: I0309 16:46:14.049471 32968 manager.go:319] Starting recovery of all containers
Mar 09 16:46:14.063868 master-0 kubenswrapper[32968]: I0309 16:46:14.063754 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w" seLinuxMountContext=""
Mar 09 16:46:14.063868 master-0 kubenswrapper[32968]: I0309 16:46:14.063852 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" volumeName="kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv" seLinuxMountContext=""
Mar 09
16:46:14.063868 master-0 kubenswrapper[32968]: I0309 16:46:14.063867 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2ec8b2-02d7-40c4-ac20-32615d689697" volumeName="kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config" seLinuxMountContext=""
Mar 09 16:46:14.063868 master-0 kubenswrapper[32968]: I0309 16:46:14.063881 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea34ff7e-27fa-4c26-acc0-ec551985eb76" volumeName="kubernetes.io/projected/ea34ff7e-27fa-4c26-acc0-ec551985eb76-kube-api-access-fl697" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063896 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5c4ccb0-f795-44bd-9bb4-baf84564c239" volumeName="kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063910 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a612208-f777-486f-9dde-048b2d898c7f" volumeName="kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063925 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec3050d-8e6f-466a-995a-f78270408a85" volumeName="kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063935 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063949 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="357570a4-f69b-4970-9b6f-fe06fc4c2f90" volumeName="kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063960 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="457f42a7-f14c-4d61-a87a-bc1ed422feed" volumeName="kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063969 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6cd9347-eec9-4549-9de4-6033112634ce" volumeName="kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063980 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fc9e7d-652c-4063-9cdb-358f58cae29a" volumeName="kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.063992 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064007 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" volumeName="kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064020 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064036 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064053 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" volumeName="kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064078 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064146 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f965b971-7e9a-4513-8450-b2b527609bd6" volumeName="kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064160 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064196 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aec186fc-aead-47fb-a7e1-8c9325897c47" volumeName="kubernetes.io/projected/aec186fc-aead-47fb-a7e1-8c9325897c47-kube-api-access-vj9cq" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064209 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebbec674-ac49-422a-9548-5c29b15ad44d" volumeName="kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064219 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064228 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e346cb5b-411d-4014-a8d0-590d8deee8ac" volumeName="kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064239 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a612208-f777-486f-9dde-048b2d898c7f" volumeName="kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064251 32968 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config" seLinuxMountContext=""
Mar 09 16:46:14.064273 master-0 kubenswrapper[32968]: I0309 16:46:14.064292 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064305 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" volumeName="kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064315 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8677cbd3-649f-41cd-8b8a-eadca971906b" volumeName="kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064325 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92bd7735-8e3c-43bb-b543-03e6e6c5142a" volumeName="kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064336 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064347 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5565c060-5952-4e85-8873-18bb80663924" volumeName="kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064359 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="709aad35-08ca-4ff5-abe5-e1558c8dc83f" volumeName="kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064375 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fc9e7d-652c-4063-9cdb-358f58cae29a" volumeName="kubernetes.io/empty-dir/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-textfile" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064389 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7dea5-9848-41f0-bf0b-ec70ec0380f1" volumeName="kubernetes.io/configmap/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-service-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064402 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064431 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b9030c9-7f5f-4e54-ae93-140469e3558b" volumeName="kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064451 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064465 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7937ccab-a6fb-4401-a4fd-7a2a91a7193f" volumeName="kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064476 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3033e86-fee0-45dc-ba74-d5448a777400" volumeName="kubernetes.io/projected/f3033e86-fee0-45dc-ba74-d5448a777400-kube-api-access-grmch" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064489 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af4aa8d4-09e1-4589-b7bf-885617a11337" volumeName="kubernetes.io/configmap/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-cabundle" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064501 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fc9e7d-652c-4063-9cdb-358f58cae29a" volumeName="kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064511 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fc9e7d-652c-4063-9cdb-358f58cae29a" volumeName="kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064520 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea34ff7e-27fa-4c26-acc0-ec551985eb76" volumeName="kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064530 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1da6f189-535a-4bf1-bbdb-758327651ae2" volumeName="kubernetes.io/projected/1da6f189-535a-4bf1-bbdb-758327651ae2-kube-api-access-xgl27" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064541 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" volumeName="kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064550 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064561 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064570 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea34ff7e-27fa-4c26-acc0-ec551985eb76" volumeName="kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064584 32968 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="e5c4ccb0-f795-44bd-9bb4-baf84564c239" volumeName="kubernetes.io/projected/e5c4ccb0-f795-44bd-9bb4-baf84564c239-kube-api-access-cvfgw" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064598 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064609 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064628 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" volumeName="kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064642 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-image-import-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064656 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064674 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" volumeName="kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064686 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c76178f6-3f0b-4b7d-ad23-724b83e35120" volumeName="kubernetes.io/projected/c76178f6-3f0b-4b7d-ad23-724b83e35120-kube-api-access-2mr7t" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064698 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064710 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c72e89f0-37ad-4515-89ba-ba1f52ba61f0" volumeName="kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-kube-api-access-h8jvl" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064723 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2ec8b2-02d7-40c4-ac20-32615d689697" volumeName="kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064733 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064742 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8972b380-8f87-4b73-8f95-440d34d03884" volumeName="kubernetes.io/configmap/8972b380-8f87-4b73-8f95-440d34d03884-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064750 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d1829b3-643f-4f79-b6de-ae6ca5e78d4a" volumeName="kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064761 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064771 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064784 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c76178f6-3f0b-4b7d-ad23-724b83e35120" volumeName="kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-tuned" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064792 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebbec674-ac49-422a-9548-5c29b15ad44d" volumeName="kubernetes.io/projected/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-api-access-jrhct" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064802 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064812 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="457f42a7-f14c-4d61-a87a-bc1ed422feed" volumeName="kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064852 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57036838-9f42-4ea1-a5c9-77f820cc22c9" volumeName="kubernetes.io/projected/57036838-9f42-4ea1-a5c9-77f820cc22c9-kube-api-access-czkqg" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064865 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8be2517a-6f28-4289-a108-6e3345a1e246" volumeName="kubernetes.io/projected/8be2517a-6f28-4289-a108-6e3345a1e246-kube-api-access-hh9fx" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064876 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" volumeName="kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-trusted-ca-bundle" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064887 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" volumeName="kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064898 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7dea5-9848-41f0-bf0b-ec70ec0380f1" volumeName="kubernetes.io/projected/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-kube-api-access" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064910 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b9030c9-7f5f-4e54-ae93-140469e3558b" volumeName="kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064922 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d1143ae-d94a-43f2-8e75-95aae13a5c57" volumeName="kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064933 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8be2517a-6f28-4289-a108-6e3345a1e246" volumeName="kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064945 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e91a0e23-c95b-4290-9c0c-29101febfc8f" volumeName="kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064954 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebbec674-ac49-422a-9548-5c29b15ad44d" volumeName="kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Mar 09 16:46:14.065138 master-0
kubenswrapper[32968]: I0309 16:46:14.064964 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" volumeName="kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064972 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631f2bdf-2ed4-4315-98c3-c5a538d0aec3" volumeName="kubernetes.io/projected/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-kube-api-access-shpfl" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.064988 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5c4ccb0-f795-44bd-9bb4-baf84564c239" volumeName="kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065037 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" volumeName="kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065047 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d15da434-241d-4a93-9ce3-f943d43bf2ce" volumeName="kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065056 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f0164f-0875-4668-b155-df69e05e8ae0" volumeName="kubernetes.io/projected/18f0164f-0875-4668-b155-df69e05e8ae0-kube-api-access-pq2bk" seLinuxMountContext="" Mar 09 16:46:14.065138 
master-0 kubenswrapper[32968]: I0309 16:46:14.065065 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d1143ae-d94a-43f2-8e75-95aae13a5c57" volumeName="kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065074 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8677cbd3-649f-41cd-8b8a-eadca971906b" volumeName="kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065111 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-client" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065123 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="72739f4d-da25-493b-91ef-d2b64e9297dd" volumeName="kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065131 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aec186fc-aead-47fb-a7e1-8c9325897c47" volumeName="kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-utilities" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065139 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9482fb93-c223-45ee-bde8-7667303270b6" volumeName="kubernetes.io/projected/9482fb93-c223-45ee-bde8-7667303270b6-kube-api-access-qjf4p" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: 
I0309 16:46:14.065148 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a320d845-3a5d-4027-a765-f0b2dc07f9de" volumeName="kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065156 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be86c85d-59b1-4279-8253-a998ca16cd4d" volumeName="kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z" seLinuxMountContext="" Mar 09 16:46:14.065138 master-0 kubenswrapper[32968]: I0309 16:46:14.065186 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065197 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1da6f189-535a-4bf1-bbdb-758327651ae2" volumeName="kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-utilities" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065206 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9" volumeName="kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065214 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d1829b3-643f-4f79-b6de-ae6ca5e78d4a" volumeName="kubernetes.io/projected/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-kube-api-access-4gkxg" seLinuxMountContext="" Mar 09 
16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065225 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3745c679-2ea9-4382-9270-4d3fbbaaf296" volumeName="kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-catalog-content" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065234 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6912539-9b06-4e2c-b6a8-155df31147f2" volumeName="kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065242 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065274 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="72739f4d-da25-493b-91ef-d2b64e9297dd" volumeName="kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065286 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7dea5-9848-41f0-bf0b-ec70ec0380f1" volumeName="kubernetes.io/secret/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065295 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebbec674-ac49-422a-9548-5c29b15ad44d" volumeName="kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca" seLinuxMountContext="" Mar 09 
16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065304 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be86c85d-59b1-4279-8253-a998ca16cd4d" volumeName="kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065323 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="df2ec8b2-02d7-40c4-ac20-32615d689697" volumeName="kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065358 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc732d23-37bc-41c2-9f9b-333ba517c1f8" volumeName="kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065369 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9" volumeName="kubernetes.io/projected/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-kube-api-access-rrms4" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065378 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8972b380-8f87-4b73-8f95-440d34d03884" volumeName="kubernetes.io/projected/8972b380-8f87-4b73-8f95-440d34d03884-kube-api-access-8hwnd" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065388 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" volumeName="kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access" seLinuxMountContext="" Mar 
09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065397 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065448 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e91a0e23-c95b-4290-9c0c-29101febfc8f" volumeName="kubernetes.io/projected/e91a0e23-c95b-4290-9c0c-29101febfc8f-kube-api-access-26xps" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065465 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065481 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a4491c-12cc-4531-ad3e-246e93ed7842" volumeName="kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065495 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" volumeName="kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065503 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9fc9e7d-652c-4063-9cdb-358f58cae29a" volumeName="kubernetes.io/projected/b9fc9e7d-652c-4063-9cdb-358f58cae29a-kube-api-access-xnstc" seLinuxMountContext="" 
Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065559 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065571 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d1829b3-643f-4f79-b6de-ae6ca5e78d4a" volumeName="kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065582 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065594 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e5c4ccb0-f795-44bd-9bb4-baf84564c239" volumeName="kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065631 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-serving-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065641 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp" 
seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065650 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c72e89f0-37ad-4515-89ba-ba1f52ba61f0" volumeName="kubernetes.io/empty-dir/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-cache" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065659 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc732d23-37bc-41c2-9f9b-333ba517c1f8" volumeName="kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065670 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc732d23-37bc-41c2-9f9b-333ba517c1f8" volumeName="kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065704 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebbec674-ac49-422a-9548-5c29b15ad44d" volumeName="kubernetes.io/empty-dir/ebbec674-ac49-422a-9548-5c29b15ad44d-volume-directive-shadow" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065720 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="baf704e3-daf2-4934-a04e-d31df8df0c4a" volumeName="kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065732 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be856881-2ceb-4803-8330-4a27ad8b1937" volumeName="kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-catalog-content" 
seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065743 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c72e89f0-37ad-4515-89ba-ba1f52ba61f0" volumeName="kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-ca-certs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065751 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6cd9347-eec9-4549-9de4-6033112634ce" volumeName="kubernetes.io/projected/a6cd9347-eec9-4549-9de4-6033112634ce-kube-api-access-lcvbf" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.065996 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="baf704e3-daf2-4934-a04e-d31df8df0c4a" volumeName="kubernetes.io/configmap/baf704e3-daf2-4934-a04e-d31df8df0c4a-mcd-auth-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066006 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e97466a-7c33-4efb-a961-14024d913a21" volumeName="kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066014 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8796f37c-d1ec-469d-90df-e007bf620e8c" volumeName="kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066022 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" 
volumeName="kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-policies" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066032 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e97466a-7c33-4efb-a961-14024d913a21" volumeName="kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066041 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="357570a4-f69b-4970-9b6f-fe06fc4c2f90" volumeName="kubernetes.io/projected/357570a4-f69b-4970-9b6f-fe06fc4c2f90-kube-api-access-495rn" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066050 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3745c679-2ea9-4382-9270-4d3fbbaaf296" volumeName="kubernetes.io/projected/3745c679-2ea9-4382-9270-4d3fbbaaf296-kube-api-access-jgj24" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066059 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8796f37c-d1ec-469d-90df-e007bf620e8c" volumeName="kubernetes.io/empty-dir/8796f37c-d1ec-469d-90df-e007bf620e8c-tmpfs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066069 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066078 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/projected/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-kube-api-access-kc2t2" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066087 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec3050d-8e6f-466a-995a-f78270408a85" volumeName="kubernetes.io/projected/3ec3050d-8e6f-466a-995a-f78270408a85-kube-api-access-qsbkx" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066095 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1da6f189-535a-4bf1-bbdb-758327651ae2" volumeName="kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-catalog-content" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066103 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066111 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a4491c-12cc-4531-ad3e-246e93ed7842" volumeName="kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066120 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" volumeName="kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066129 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066141 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5587e967-124e-4f2a-b7fb-42cb16bfc337" volumeName="kubernetes.io/projected/5587e967-124e-4f2a-b7fb-42cb16bfc337-kube-api-access-4dzfq" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066150 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af4aa8d4-09e1-4589-b7bf-885617a11337" volumeName="kubernetes.io/secret/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-key" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066158 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9" volumeName="kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066185 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8677cbd3-649f-41cd-8b8a-eadca971906b" volumeName="kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066227 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" volumeName="kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066241 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d6b4992e-50f3-473c-aa83-ed35569ba307" volumeName="kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-auth-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066252 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d" volumeName="kubernetes.io/projected/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-kube-api-access-wn8hj" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066260 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" volumeName="kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066268 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="709aad35-08ca-4ff5-abe5-e1558c8dc83f" volumeName="kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066302 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066313 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="004d1e93-2345-4e62-902c-33f9dbb0f397" volumeName="kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066322 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="18f0164f-0875-4668-b155-df69e05e8ae0" volumeName="kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066331 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066339 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/projected/79a8ea87-c29a-4248-927f-6f1acfc494d7-kube-api-access-rvkfn" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066348 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6cd9347-eec9-4549-9de4-6033112634ce" volumeName="kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066376 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af4aa8d4-09e1-4589-b7bf-885617a11337" volumeName="kubernetes.io/projected/af4aa8d4-09e1-4589-b7bf-885617a11337-kube-api-access-whqdm" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066390 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066399 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3745c679-2ea9-4382-9270-4d3fbbaaf296" volumeName="kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-utilities" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066407 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" volumeName="kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066415 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6912539-9b06-4e2c-b6a8-155df31147f2" volumeName="kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066469 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a4491c-12cc-4531-ad3e-246e93ed7842" volumeName="kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066478 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066485 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066496 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-trusted-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066506 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066516 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" volumeName="kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066530 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e97466a-7c33-4efb-a961-14024d913a21" volumeName="kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066540 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a62ba179-443d-424f-8cff-c75677e8cd5c" volumeName="kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066551 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea34ff7e-27fa-4c26-acc0-ec551985eb76" volumeName="kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066558 32968 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8be2517a-6f28-4289-a108-6e3345a1e246" volumeName="kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066568 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" volumeName="kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-serving-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066579 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d15da434-241d-4a93-9ce3-f943d43bf2ce" volumeName="kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066595 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a2aa6f3-f049-423a-a8f5-5d33fc214a7b" volumeName="kubernetes.io/empty-dir/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-cache" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066607 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631f2bdf-2ed4-4315-98c3-c5a538d0aec3" volumeName="kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066617 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066627 32968 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" volumeName="kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066638 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6b4992e-50f3-473c-aa83-ed35569ba307" volumeName="kubernetes.io/secret/d6b4992e-50f3-473c-aa83-ed35569ba307-proxy-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066648 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" volumeName="kubernetes.io/projected/8c93fb5d-373d-4473-99dd-50e4398bafbf-kube-api-access-nl5kt" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.066658 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92bd7735-8e3c-43bb-b543-03e6e6c5142a" volumeName="kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067022 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" volumeName="kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067060 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067077 32968 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="004d1e93-2345-4e62-902c-33f9dbb0f397" volumeName="kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067103 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec3050d-8e6f-466a-995a-f78270408a85" volumeName="kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067113 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="696fcca2-df1a-491d-956d-1cfda1ee5e70" volumeName="kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067133 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc732d23-37bc-41c2-9f9b-333ba517c1f8" volumeName="kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067145 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8796f37c-d1ec-469d-90df-e007bf620e8c" volumeName="kubernetes.io/projected/8796f37c-d1ec-469d-90df-e007bf620e8c-kube-api-access-6n2qw" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067158 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a612208-f777-486f-9dde-048b2d898c7f" volumeName="kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067176 32968 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" volumeName="kubernetes.io/projected/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-kube-api-access-dxlnq" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067196 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067212 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8be2517a-6f28-4289-a108-6e3345a1e246" volumeName="kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067223 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" volumeName="kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-client" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067234 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6912539-9b06-4e2c-b6a8-155df31147f2" volumeName="kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067252 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6b4992e-50f3-473c-aa83-ed35569ba307" volumeName="kubernetes.io/projected/d6b4992e-50f3-473c-aa83-ed35569ba307-kube-api-access-bhzzg" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067266 32968 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067282 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34c0b60e-da69-452d-858d-0af77f18946d" volumeName="kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067327 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067342 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067362 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5565c060-5952-4e85-8873-18bb80663924" volumeName="kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067377 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" volumeName="kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067395 
32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8677cbd3-649f-41cd-8b8a-eadca971906b" volumeName="kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067407 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a320d845-3a5d-4027-a765-f0b2dc07f9de" volumeName="kubernetes.io/projected/a320d845-3a5d-4027-a765-f0b2dc07f9de-kube-api-access-868cs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067468 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be856881-2ceb-4803-8330-4a27ad8b1937" volumeName="kubernetes.io/projected/be856881-2ceb-4803-8330-4a27ad8b1937-kube-api-access-v98bk" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067537 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="457f42a7-f14c-4d61-a87a-bc1ed422feed" volumeName="kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067594 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a2aa6f3-f049-423a-a8f5-5d33fc214a7b" volumeName="kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067625 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5587e967-124e-4f2a-b7fb-42cb16bfc337" volumeName="kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 
16:46:14.067651 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d1143ae-d94a-43f2-8e75-95aae13a5c57" volumeName="kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067669 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d1143ae-d94a-43f2-8e75-95aae13a5c57" volumeName="kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067692 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92bd7735-8e3c-43bb-b543-03e6e6c5142a" volumeName="kubernetes.io/projected/92bd7735-8e3c-43bb-b543-03e6e6c5142a-kube-api-access-dv8rh" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067719 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6cd9347-eec9-4549-9de4-6033112634ce" volumeName="kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067742 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="baf704e3-daf2-4934-a04e-d31df8df0c4a" volumeName="kubernetes.io/projected/baf704e3-daf2-4934-a04e-d31df8df0c4a-kube-api-access-nhglf" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067760 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34c0b60e-da69-452d-858d-0af77f18946d" volumeName="kubernetes.io/projected/34c0b60e-da69-452d-858d-0af77f18946d-kube-api-access-vmdb8" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 
kubenswrapper[32968]: I0309 16:46:14.067781 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a2aa6f3-f049-423a-a8f5-5d33fc214a7b" volumeName="kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-ca-certs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067797 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d47955b-b85c-4137-9dea-ff0c20d5ab77" volumeName="kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067827 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c76178f6-3f0b-4b7d-ad23-724b83e35120" volumeName="kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-tmp" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067850 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6b4992e-50f3-473c-aa83-ed35569ba307" volumeName="kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067864 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e4895f22-8fcd-4ace-96d8-bc2e18a67891" volumeName="kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067885 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d1143ae-d94a-43f2-8e75-95aae13a5c57" volumeName="kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 
kubenswrapper[32968]: I0309 16:46:14.067907 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8796f37c-d1ec-469d-90df-e007bf620e8c" volumeName="kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067930 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aec186fc-aead-47fb-a7e1-8c9325897c47" volumeName="kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-catalog-content" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067953 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be856881-2ceb-4803-8330-4a27ad8b1937" volumeName="kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-utilities" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067970 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.067994 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-encryption-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068021 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e765395-7c6b-4cba-9a5a-37ba888722bb" volumeName="kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 
master-0 kubenswrapper[32968]: I0309 16:46:14.068037 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="79a8ea87-c29a-4248-927f-6f1acfc494d7" volumeName="kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068069 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" volumeName="kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068085 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="004d1e93-2345-4e62-902c-33f9dbb0f397" volumeName="kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068111 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5587e967-124e-4f2a-b7fb-42cb16bfc337" volumeName="kubernetes.io/configmap/5587e967-124e-4f2a-b7fb-42cb16bfc337-config-volume" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068125 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fa7f88a3-9845-49a3-a108-d524df592961" volumeName="kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068139 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" volumeName="kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 
kubenswrapper[32968]: I0309 16:46:14.068160 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b9030c9-7f5f-4e54-ae93-140469e3558b" volumeName="kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068181 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" volumeName="kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068204 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c93fb5d-373d-4473-99dd-50e4398bafbf" volumeName="kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-encryption-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068217 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2e38be5-1d33-4171-b27f-78a335f1590b" volumeName="kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068232 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f965b971-7e9a-4513-8450-b2b527609bd6" volumeName="kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068258 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a2aa6f3-f049-423a-a8f5-5d33fc214a7b" volumeName="kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-kube-api-access-p8rjs" seLinuxMountContext="" Mar 
09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068277 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ebbec674-ac49-422a-9548-5c29b15ad44d" volumeName="kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068302 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef122f26-bfae-44d2-a70a-8507b3b47332" volumeName="kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068315 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8be2517a-6f28-4289-a108-6e3345a1e246" volumeName="kubernetes.io/empty-dir/8be2517a-6f28-4289-a108-6e3345a1e246-snapshots" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068330 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a320d845-3a5d-4027-a765-f0b2dc07f9de" volumeName="kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068349 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef122f26-bfae-44d2-a70a-8507b3b47332" volumeName="kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068365 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92bd7735-8e3c-43bb-b543-03e6e6c5142a" volumeName="kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls" 
seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068389 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba020e0-1728-4e56-9618-d0ec3d9126eb" volumeName="kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068403 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ec3050d-8e6f-466a-995a-f78270408a85" volumeName="kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068441 32968 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8972b380-8f87-4b73-8f95-440d34d03884" volumeName="kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls" seLinuxMountContext="" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068454 32968 reconstruct.go:97] "Volume reconstruction finished" Mar 09 16:46:14.069681 master-0 kubenswrapper[32968]: I0309 16:46:14.068464 32968 reconciler.go:26] "Reconciler: start to sync state" Mar 09 16:46:14.079153 master-0 kubenswrapper[32968]: I0309 16:46:14.072565 32968 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 09 16:46:14.080570 master-0 kubenswrapper[32968]: I0309 16:46:14.080278 32968 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 09 16:46:14.084549 master-0 kubenswrapper[32968]: I0309 16:46:14.083458 32968 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 09 16:46:14.084549 master-0 kubenswrapper[32968]: I0309 16:46:14.083552 32968 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 09 16:46:14.084549 master-0 kubenswrapper[32968]: I0309 16:46:14.083578 32968 kubelet.go:2335] "Starting kubelet main sync loop" Mar 09 16:46:14.084549 master-0 kubenswrapper[32968]: E0309 16:46:14.083692 32968 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 09 16:46:14.087855 master-0 kubenswrapper[32968]: I0309 16:46:14.087789 32968 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 09 16:46:14.098790 master-0 kubenswrapper[32968]: I0309 16:46:14.098713 32968 generic.go:334] "Generic (PLEG): container finished" podID="5565c060-5952-4e85-8873-18bb80663924" containerID="dda1c1f36a6b6d9ac75b2bd00d887fa58cc2391c73527d2f8cbd81621d10c3e4" exitCode=0 Mar 09 16:46:14.103588 master-0 kubenswrapper[32968]: I0309 16:46:14.103536 32968 generic.go:334] "Generic (PLEG): container finished" podID="5b9030c9-7f5f-4e54-ae93-140469e3558b" containerID="66330a4bd334b8d1827e4db59cc4dd96a4c0efbd28a98ca757e4b3ea6788abd7" exitCode=0 Mar 09 16:46:14.109004 master-0 kubenswrapper[32968]: I0309 16:46:14.108957 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 09 16:46:14.109825 master-0 kubenswrapper[32968]: I0309 16:46:14.109481 32968 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482" exitCode=1 Mar 09 16:46:14.109825 master-0 kubenswrapper[32968]: I0309 16:46:14.109518 32968 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" 
containerID="3e490c33eb237e7928c7acb6c95a66dd05db37a72075c92f51066f56e730d5ab" exitCode=0 Mar 09 16:46:14.113685 master-0 kubenswrapper[32968]: I0309 16:46:14.113614 32968 generic.go:334] "Generic (PLEG): container finished" podID="be856881-2ceb-4803-8330-4a27ad8b1937" containerID="91eeee0f78c7370b1376450e55648943d546edf775431451383fe45a76895603" exitCode=0 Mar 09 16:46:14.113685 master-0 kubenswrapper[32968]: I0309 16:46:14.113655 32968 generic.go:334] "Generic (PLEG): container finished" podID="be856881-2ceb-4803-8330-4a27ad8b1937" containerID="b8b2f1d085aa9dfc2fab38e228753a6c99a8279f2a4596b733cb32f506c3c80e" exitCode=0 Mar 09 16:46:14.119668 master-0 kubenswrapper[32968]: I0309 16:46:14.119606 32968 generic.go:334] "Generic (PLEG): container finished" podID="8972b380-8f87-4b73-8f95-440d34d03884" containerID="478050fc5a610db3a7ffbb70974c16fcbc1a3e86ff4bd2cba7f1c2f94f7b4a39" exitCode=0 Mar 09 16:46:14.129530 master-0 kubenswrapper[32968]: I0309 16:46:14.129396 32968 generic.go:334] "Generic (PLEG): container finished" podID="d6b4992e-50f3-473c-aa83-ed35569ba307" containerID="81a061ad8b3b8276fdddd4547781d1739b9b814b6efb0c8aa846322d762aeea4" exitCode=0 Mar 09 16:46:14.145284 master-0 kubenswrapper[32968]: I0309 16:46:14.145223 32968 generic.go:334] "Generic (PLEG): container finished" podID="34a4491c-12cc-4531-ad3e-246e93ed7842" containerID="49dd8e161cea6212329f1712e1bf4a0806751557004321c54967d70157f3883b" exitCode=0 Mar 09 16:46:14.147780 master-0 kubenswrapper[32968]: I0309 16:46:14.147744 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1e5298b1-ccde-4c18-8cdb-f415a4842f75/installer/0.log" Mar 09 16:46:14.147780 master-0 kubenswrapper[32968]: I0309 16:46:14.147782 32968 generic.go:334] "Generic (PLEG): container finished" podID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerID="99a339c5f3968e16e82464c06f5f8bce759eee7e72f76870e9bcaf5b40dfae4f" exitCode=1 Mar 09 16:46:14.150477 master-0 
kubenswrapper[32968]: I0309 16:46:14.150378 32968 generic.go:334] "Generic (PLEG): container finished" podID="8be2517a-6f28-4289-a108-6e3345a1e246" containerID="849641fef697929d82e47cd74e196c87b6f13e825237b99e39d16fe99de91e48" exitCode=0 Mar 09 16:46:14.153776 master-0 kubenswrapper[32968]: I0309 16:46:14.153709 32968 generic.go:334] "Generic (PLEG): container finished" podID="2e765395-7c6b-4cba-9a5a-37ba888722bb" containerID="1765d222fa51dc975cebdd1bdcaa4ce3c6b31334b8d1330af7de3940a2e5ca59" exitCode=0 Mar 09 16:46:14.156358 master-0 kubenswrapper[32968]: I0309 16:46:14.156311 32968 generic.go:334] "Generic (PLEG): container finished" podID="e4895f22-8fcd-4ace-96d8-bc2e18a67891" containerID="127fddf033d016698d708311f1ce4a751f3a2f860d40130a5519cb0b6938e0a1" exitCode=0 Mar 09 16:46:14.159520 master-0 kubenswrapper[32968]: I0309 16:46:14.159474 32968 generic.go:334] "Generic (PLEG): container finished" podID="eaf7dea5-9848-41f0-bf0b-ec70ec0380f1" containerID="cf28483378cea782ea700907bc68169878c403e836eb639a2889f087184ba71c" exitCode=0 Mar 09 16:46:14.165080 master-0 kubenswrapper[32968]: I0309 16:46:14.165038 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jzjhh_8d1829b3-643f-4f79-b6de-ae6ca5e78d4a/cluster-autoscaler-operator/0.log" Mar 09 16:46:14.165673 master-0 kubenswrapper[32968]: I0309 16:46:14.165554 32968 generic.go:334] "Generic (PLEG): container finished" podID="8d1829b3-643f-4f79-b6de-ae6ca5e78d4a" containerID="e8cb30c90125a1e3b3eb6f6752eb090667969ca7a1ad05a2f50043a22d1558b3" exitCode=255 Mar 09 16:46:14.169509 master-0 kubenswrapper[32968]: I0309 16:46:14.169476 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-p27tf_fa7f88a3-9845-49a3-a108-d524df592961/cluster-baremetal-operator/1.log" Mar 09 16:46:14.169903 master-0 kubenswrapper[32968]: I0309 16:46:14.169862 32968 generic.go:334] "Generic (PLEG): 
container finished" podID="fa7f88a3-9845-49a3-a108-d524df592961" containerID="5e7be62db7c2ebff5b66de7a7333b7d5e3cfc65957eae64bbca9ae219287c419" exitCode=1 Mar 09 16:46:14.173223 master-0 kubenswrapper[32968]: I0309 16:46:14.173192 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-tnbvb_c72e89f0-37ad-4515-89ba-ba1f52ba61f0/manager/0.log" Mar 09 16:46:14.173346 master-0 kubenswrapper[32968]: I0309 16:46:14.173232 32968 generic.go:334] "Generic (PLEG): container finished" podID="c72e89f0-37ad-4515-89ba-ba1f52ba61f0" containerID="eb0d4a5cd6b917ab3136d6670a91daed3539d6022e53b4e8f77735bc48ef873e" exitCode=1 Mar 09 16:46:14.175336 master-0 kubenswrapper[32968]: I0309 16:46:14.175306 32968 generic.go:334] "Generic (PLEG): container finished" podID="631f2bdf-2ed4-4315-98c3-c5a538d0aec3" containerID="ec2bd4079a912677c69adce5f15ccbeec93411cab07eef7010dd35a99bc07993" exitCode=0 Mar 09 16:46:14.178351 master-0 kubenswrapper[32968]: I0309 16:46:14.178314 32968 generic.go:334] "Generic (PLEG): container finished" podID="a62ba179-443d-424f-8cff-c75677e8cd5c" containerID="556fa937e7c3581b8c9b14e4926a7f4f60005bc952c23b42c146238b8e0e37d0" exitCode=0 Mar 09 16:46:14.183930 master-0 kubenswrapper[32968]: E0309 16:46:14.183763 32968 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 09 16:46:14.189167 master-0 kubenswrapper[32968]: I0309 16:46:14.189098 32968 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="f92b3bf64fc4165da416ac63f159e2be71d6527248ee0c28520165449adf5e4e" exitCode=0 Mar 09 16:46:14.189167 master-0 kubenswrapper[32968]: I0309 16:46:14.189161 32968 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="db91761f4ed69865df84925e7d692b45a5d00ca5d8cda47d3e02e2821fc11818" exitCode=0 Mar 09 
16:46:14.189167 master-0 kubenswrapper[32968]: I0309 16:46:14.189172 32968 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="0e4dffbedd2651da68c4f09131df95460c21cf12adecaf4ed6c71f35a722b888" exitCode=0 Mar 09 16:46:14.189379 master-0 kubenswrapper[32968]: I0309 16:46:14.189185 32968 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="1c8c260da059200c19ff4508a0a4e27c1306ddf0f97c62b30fb7ed75be818372" exitCode=0 Mar 09 16:46:14.189379 master-0 kubenswrapper[32968]: I0309 16:46:14.189198 32968 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="e85c846d70b2880d50adc9dc310cb9743473eb6e96f2c0617b7d1adfb1817ac6" exitCode=0 Mar 09 16:46:14.189379 master-0 kubenswrapper[32968]: I0309 16:46:14.189225 32968 generic.go:334] "Generic (PLEG): container finished" podID="1ba020e0-1728-4e56-9618-d0ec3d9126eb" containerID="e18e252fd560cea1fe0cd7cc5f8a186dd08bab19f2d2e38f70e4a77bd4ec31c0" exitCode=0 Mar 09 16:46:14.192509 master-0 kubenswrapper[32968]: I0309 16:46:14.192461 32968 generic.go:334] "Generic (PLEG): container finished" podID="e2e38be5-1d33-4171-b27f-78a335f1590b" containerID="26536dc0c3eb884535f611edd83aab852a51eeb18c5af26fe55fde4610066f56" exitCode=0 Mar 09 16:46:14.204028 master-0 kubenswrapper[32968]: I0309 16:46:14.203998 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-4qg6v_a6cd9347-eec9-4549-9de4-6033112634ce/machine-api-operator/0.log" Mar 09 16:46:14.204894 master-0 kubenswrapper[32968]: I0309 16:46:14.204854 32968 generic.go:334] "Generic (PLEG): container finished" podID="a6cd9347-eec9-4549-9de4-6033112634ce" containerID="4a72ada443de84c13a8cbe47843e972a9ed55f3d914623df43cbb70dacd90962" exitCode=255 Mar 09 16:46:14.207219 master-0 kubenswrapper[32968]: I0309 16:46:14.207191 32968 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/5.log" Mar 09 16:46:14.208163 master-0 kubenswrapper[32968]: I0309 16:46:14.208121 32968 generic.go:334] "Generic (PLEG): container finished" podID="f606b775-bf22-4d64-abb4-8e0e24ddb5cd" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" exitCode=1 Mar 09 16:46:14.226133 master-0 kubenswrapper[32968]: I0309 16:46:14.226090 32968 generic.go:334] "Generic (PLEG): container finished" podID="d2d3c20a-f92e-433b-9fbc-b667b7bcf175" containerID="b1c16a3899be6493dfcbe845944c02e0cb586d0232ff82e821db925b84a7b8fd" exitCode=0 Mar 09 16:46:14.260542 master-0 kubenswrapper[32968]: I0309 16:46:14.260471 32968 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497" exitCode=0 Mar 09 16:46:14.264001 master-0 kubenswrapper[32968]: I0309 16:46:14.263931 32968 generic.go:334] "Generic (PLEG): container finished" podID="a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a" containerID="6e9c4ef8e54a1ddaaeace68d16cbf279e55f0b1084e638b1cbf0208c30f75c2d" exitCode=0 Mar 09 16:46:14.268554 master-0 kubenswrapper[32968]: I0309 16:46:14.268484 32968 generic.go:334] "Generic (PLEG): container finished" podID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerID="103d3eac07aecf0258cc2c832ca414dc5ada6722c47422884569884c3c3f57fc" exitCode=0 Mar 09 16:46:14.271360 master-0 kubenswrapper[32968]: I0309 16:46:14.271314 32968 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" containerID="0c53bd04ab08a6dcf8bec8933ab495e84121056b0c52db4cc518d1487933ea5c" exitCode=0 Mar 09 16:46:14.271360 master-0 kubenswrapper[32968]: I0309 16:46:14.271338 32968 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" 
containerID="b70b23fca99483b0715615a35d01f07d100fae855b33a57805678b96e0a4e1a2" exitCode=0 Mar 09 16:46:14.271360 master-0 kubenswrapper[32968]: I0309 16:46:14.271347 32968 generic.go:334] "Generic (PLEG): container finished" podID="1e97466a-7c33-4efb-a961-14024d913a21" containerID="d7922052b68455850c77125803bb69415764501411377535a0999663fe5a312c" exitCode=0 Mar 09 16:46:14.279278 master-0 kubenswrapper[32968]: I0309 16:46:14.279198 32968 generic.go:334] "Generic (PLEG): container finished" podID="8c93fb5d-373d-4473-99dd-50e4398bafbf" containerID="2ddc6aee7d8d1006c27dca4fe5b21a0e258f10014a5a8ed340c294e3e6bda574" exitCode=0 Mar 09 16:46:14.287128 master-0 kubenswrapper[32968]: I0309 16:46:14.287069 32968 generic.go:334] "Generic (PLEG): container finished" podID="1da6f189-535a-4bf1-bbdb-758327651ae2" containerID="32286fc29ff0c774f7955c0ba49c91530fb15cf50845d1f7c12e2c8a6cdabfca" exitCode=0 Mar 09 16:46:14.287128 master-0 kubenswrapper[32968]: I0309 16:46:14.287122 32968 generic.go:334] "Generic (PLEG): container finished" podID="1da6f189-535a-4bf1-bbdb-758327651ae2" containerID="182df7a2500961e13e750c1e7666f2ebae9c039f790cc286ba67b25badf99579" exitCode=0 Mar 09 16:46:14.290375 master-0 kubenswrapper[32968]: I0309 16:46:14.290298 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-cvdzq_357570a4-f69b-4970-9b6f-fe06fc4c2f90/control-plane-machine-set-operator/0.log" Mar 09 16:46:14.290490 master-0 kubenswrapper[32968]: I0309 16:46:14.290402 32968 generic.go:334] "Generic (PLEG): container finished" podID="357570a4-f69b-4970-9b6f-fe06fc4c2f90" containerID="da01301d90c8ec36dd26e650eefd6003d2c0b759242bb4c2d47a570d6b83fec7" exitCode=1 Mar 09 16:46:14.295615 master-0 kubenswrapper[32968]: I0309 16:46:14.295583 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_f4f44499-c673-4c73-8ee9-dcef8914ce14/installer/0.log" Mar 09 16:46:14.295710 
master-0 kubenswrapper[32968]: I0309 16:46:14.295646 32968 generic.go:334] "Generic (PLEG): container finished" podID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerID="e31e101fae28ad5c7e22332114d10cb8955a646e181d2af78e8c1a0573c9de55" exitCode=1 Mar 09 16:46:14.303173 master-0 kubenswrapper[32968]: I0309 16:46:14.303114 32968 generic.go:334] "Generic (PLEG): container finished" podID="6d47955b-b85c-4137-9dea-ff0c20d5ab77" containerID="c0b6c146623a62ab0a5823c85168f8b6cd4a93ec0368a37111e0616c32e8f226" exitCode=0 Mar 09 16:46:14.313647 master-0 kubenswrapper[32968]: I0309 16:46:14.313606 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_963633a2-3f9d-4b82-9e53-d749fa52bf8e/installer/0.log" Mar 09 16:46:14.313902 master-0 kubenswrapper[32968]: I0309 16:46:14.313665 32968 generic.go:334] "Generic (PLEG): container finished" podID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerID="d41d86bd25e4bbee52e08006f2bc72adad98a14d24d258528deb873f333249a6" exitCode=1 Mar 09 16:46:14.318160 master-0 kubenswrapper[32968]: I0309 16:46:14.317219 32968 generic.go:334] "Generic (PLEG): container finished" podID="217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8" containerID="d307ffcff5003265477b28e1b3fffae55393c2ce9ccdbb4d1fcf4602c47a75a3" exitCode=0 Mar 09 16:46:14.323470 master-0 kubenswrapper[32968]: I0309 16:46:14.323389 32968 generic.go:334] "Generic (PLEG): container finished" podID="d6912539-9b06-4e2c-b6a8-155df31147f2" containerID="cd7efe315849cdb3199a98f6f5c36f77f4fa9f5957ff9a8e14c0814b556fdc59" exitCode=0 Mar 09 16:46:14.325462 master-0 kubenswrapper[32968]: I0309 16:46:14.325393 32968 generic.go:334] "Generic (PLEG): container finished" podID="a8139a33-a597-4038-9bb4-183e72f498b4" containerID="f045963e70da23fa859bf7a0a6d7963e8dbb9e83018d8e030eee264ed97fa82a" exitCode=0 Mar 09 16:46:14.332864 master-0 kubenswrapper[32968]: I0309 16:46:14.332764 32968 generic.go:334] "Generic (PLEG): container finished" 
podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239" exitCode=1 Mar 09 16:46:14.340540 master-0 kubenswrapper[32968]: I0309 16:46:14.340465 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-xrgml_4a2aa6f3-f049-423a-a8f5-5d33fc214a7b/manager/0.log" Mar 09 16:46:14.341563 master-0 kubenswrapper[32968]: I0309 16:46:14.341467 32968 generic.go:334] "Generic (PLEG): container finished" podID="4a2aa6f3-f049-423a-a8f5-5d33fc214a7b" containerID="4fc5ebe625ed54c3d67f7a4689964a54c61c83f3612ec773524ffd6c73856293" exitCode=1 Mar 09 16:46:14.348554 master-0 kubenswrapper[32968]: I0309 16:46:14.348453 32968 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="9fcbc01d5f4782d9a43018a868b466e4526448de43e3cfccab2380f32946c687" exitCode=0 Mar 09 16:46:14.348554 master-0 kubenswrapper[32968]: I0309 16:46:14.348503 32968 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="3c0f8c5cb67d0f971149cd26dcfae78f198a52498c78f29a7fa53e12c2f891cd" exitCode=0 Mar 09 16:46:14.348554 master-0 kubenswrapper[32968]: I0309 16:46:14.348513 32968 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="626cde970cdf775bf13812ca4ee6f26bea7e402f4efa0e9b555e7bcc797f2635" exitCode=0 Mar 09 16:46:14.351811 master-0 kubenswrapper[32968]: I0309 16:46:14.351604 32968 generic.go:334] "Generic (PLEG): container finished" podID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerID="416bfbec5030b68d4b4837b781967c573c06ae0b5142f97eb8ad1a431a641798" exitCode=0 Mar 09 16:46:14.359695 master-0 kubenswrapper[32968]: I0309 16:46:14.359384 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_3a8a48b1-d4a9-48fb-912e-2f793a6d8478/installer/0.log" Mar 09 16:46:14.359695 master-0 
kubenswrapper[32968]: I0309 16:46:14.359467 32968 generic.go:334] "Generic (PLEG): container finished" podID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerID="1f3ede07b96bf06c243e7982afd5fe4a072e8a3d04eb6bffe1b7a50cca581cf9" exitCode=1 Mar 09 16:46:14.363877 master-0 kubenswrapper[32968]: I0309 16:46:14.363803 32968 generic.go:334] "Generic (PLEG): container finished" podID="af4aa8d4-09e1-4589-b7bf-885617a11337" containerID="f2698e39e3b5a035604353ee09cee0739a68806bc558360103357b0dbe104e2f" exitCode=0 Mar 09 16:46:14.368094 master-0 kubenswrapper[32968]: I0309 16:46:14.368037 32968 generic.go:334] "Generic (PLEG): container finished" podID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerID="8f1a1e060987b820e153c9d0c33ec719e219b362f2873a0c12439e503198da64" exitCode=0 Mar 09 16:46:14.372760 master-0 kubenswrapper[32968]: I0309 16:46:14.372706 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-nqwd2_60e07bf5-933c-4ff6-9a1a-2fd05392c8e9/approver/1.log" Mar 09 16:46:14.373232 master-0 kubenswrapper[32968]: I0309 16:46:14.373196 32968 generic.go:334] "Generic (PLEG): container finished" podID="60e07bf5-933c-4ff6-9a1a-2fd05392c8e9" containerID="13f8ce747ae94aa028643a0d90bae20ae130da211dc31135e5f8daffa80a000f" exitCode=1 Mar 09 16:46:14.379251 master-0 kubenswrapper[32968]: I0309 16:46:14.379169 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_6d95c7ed-e3ea-4383-b083-1df5df078f1c/installer/0.log" Mar 09 16:46:14.379251 master-0 kubenswrapper[32968]: I0309 16:46:14.379239 32968 generic.go:334] "Generic (PLEG): container finished" podID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerID="8de19850c9308d09c5cd12077a0d9362d507f0d6192f1e12c63ed63d09fea438" exitCode=1 Mar 09 16:46:14.383911 master-0 kubenswrapper[32968]: E0309 16:46:14.383837 32968 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have 
completed yet" Mar 09 16:46:14.385861 master-0 kubenswrapper[32968]: I0309 16:46:14.385834 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-fqwtv_f965b971-7e9a-4513-8450-b2b527609bd6/package-server-manager/0.log" Mar 09 16:46:14.386463 master-0 kubenswrapper[32968]: I0309 16:46:14.386366 32968 generic.go:334] "Generic (PLEG): container finished" podID="f965b971-7e9a-4513-8450-b2b527609bd6" containerID="6d5f471d38ab26de2789bb7383ccfd1af1a0996fc7de4e1ac556541f152b9d74" exitCode=1 Mar 09 16:46:14.392158 master-0 kubenswrapper[32968]: I0309 16:46:14.392104 32968 generic.go:334] "Generic (PLEG): container finished" podID="b9fc9e7d-652c-4063-9cdb-358f58cae29a" containerID="9f1f79ee7ed70eccdd62d45b2f4106c0429d123cf8b355c716fdcb468ee74764" exitCode=0 Mar 09 16:46:14.402836 master-0 kubenswrapper[32968]: I0309 16:46:14.402717 32968 generic.go:334] "Generic (PLEG): container finished" podID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerID="34d69e01c0df1a8808ea1e61ee678a2f4eb359f9a66a8c80ee688b834fc7aa8b" exitCode=0 Mar 09 16:46:14.406721 master-0 kubenswrapper[32968]: I0309 16:46:14.406659 32968 generic.go:334] "Generic (PLEG): container finished" podID="6cf9eae5-38bc-48fa-8339-d0751bb18e8c" containerID="5d8c100b8bc3cd727e168a74c2e48d870e8a9516215f22c217ef9c223c8bfc22" exitCode=0 Mar 09 16:46:14.424141 master-0 kubenswrapper[32968]: I0309 16:46:14.423806 32968 generic.go:334] "Generic (PLEG): container finished" podID="3a612208-f777-486f-9dde-048b2d898c7f" containerID="7559e3794c2b375f42338baad89cc8a6296746d7de572bec45d4f7ebb08433c6" exitCode=0 Mar 09 16:46:14.427830 master-0 kubenswrapper[32968]: I0309 16:46:14.427780 32968 generic.go:334] "Generic (PLEG): container finished" podID="3745c679-2ea9-4382-9270-4d3fbbaaf296" containerID="c375602309b4389668beef44b0297110b18bbf2efc79b2919215e7134a14a3e3" exitCode=0 Mar 09 16:46:14.427830 master-0 kubenswrapper[32968]: I0309 16:46:14.427826 
32968 generic.go:334] "Generic (PLEG): container finished" podID="3745c679-2ea9-4382-9270-4d3fbbaaf296" containerID="38bf4a179e73486d5ae4aba2338c68d5699149ac664abb92d0a252b9049f8f76" exitCode=0 Mar 09 16:46:14.433527 master-0 kubenswrapper[32968]: I0309 16:46:14.432922 32968 generic.go:334] "Generic (PLEG): container finished" podID="aec186fc-aead-47fb-a7e1-8c9325897c47" containerID="c4eb68e7264550f4ffbefbb8ac663e749aa15295f8af2d3fc21d82134f75fd3a" exitCode=0 Mar 09 16:46:14.433527 master-0 kubenswrapper[32968]: I0309 16:46:14.432955 32968 generic.go:334] "Generic (PLEG): container finished" podID="aec186fc-aead-47fb-a7e1-8c9325897c47" containerID="076a8011760cf87704cccc794f400077e346aa9939d01683ec7b3535a6cd3a0f" exitCode=0 Mar 09 16:46:14.438806 master-0 kubenswrapper[32968]: I0309 16:46:14.438699 32968 generic.go:334] "Generic (PLEG): container finished" podID="797303d2-6d31-42f6-a1a4-c894509fba00" containerID="0bd8a00ef7113d3a7bd5dd2884b67a8d73e4a8ff56a6f8e02309ba516f2a9770" exitCode=0 Mar 09 16:46:14.443622 master-0 kubenswrapper[32968]: I0309 16:46:14.443593 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/config-sync-controllers/0.log" Mar 09 16:46:14.444300 master-0 kubenswrapper[32968]: I0309 16:46:14.444281 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-zctw6_ea34ff7e-27fa-4c26-acc0-ec551985eb76/cluster-cloud-controller-manager/0.log" Mar 09 16:46:14.444357 master-0 kubenswrapper[32968]: I0309 16:46:14.444327 32968 generic.go:334] "Generic (PLEG): container finished" podID="ea34ff7e-27fa-4c26-acc0-ec551985eb76" containerID="39d1c81df8c0e375db5e92a2da393b888f722383ebb7782e3b3f53c06fee366b" exitCode=1 Mar 09 16:46:14.444357 master-0 kubenswrapper[32968]: I0309 16:46:14.444352 32968 
generic.go:334] "Generic (PLEG): container finished" podID="ea34ff7e-27fa-4c26-acc0-ec551985eb76" containerID="cd71269592a701160cbe606bc3b5a764b96e0af9d702d7660f9fc5b18a628065" exitCode=1 Mar 09 16:46:14.448054 master-0 kubenswrapper[32968]: I0309 16:46:14.448014 32968 generic.go:334] "Generic (PLEG): container finished" podID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerID="58ca4bfd8d3d92cf6b0638eb596cecb093134580ce5c529622e4707ab6f67862" exitCode=0 Mar 09 16:46:14.452583 master-0 kubenswrapper[32968]: I0309 16:46:14.452532 32968 generic.go:334] "Generic (PLEG): container finished" podID="166fdeb9-c79f-4d99-8a6b-3f5c43398e9d" containerID="5a53068d3aa0add7405bb4afae02f9c31d2802806c126fb434c8dcf05fc615e2" exitCode=0 Mar 09 16:46:14.454950 master-0 kubenswrapper[32968]: I0309 16:46:14.454834 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-f594m_57036838-9f42-4ea1-a5c9-77f820cc22c9/snapshot-controller/3.log" Mar 09 16:46:14.454950 master-0 kubenswrapper[32968]: I0309 16:46:14.454886 32968 generic.go:334] "Generic (PLEG): container finished" podID="57036838-9f42-4ea1-a5c9-77f820cc22c9" containerID="76f95d493c01558a16c7486e09fc1848aa9ad23e94142ae23293a88b2d8cd6fd" exitCode=1 Mar 09 16:46:14.456933 master-0 kubenswrapper[32968]: I0309 16:46:14.456890 32968 generic.go:334] "Generic (PLEG): container finished" podID="4320d00b-9add-4224-9632-d8422fec5b0b" containerID="2ef11d86aa2070868cc06b6da364e1a472811e4f000a136a0ce2bb7d159b1085" exitCode=0 Mar 09 16:46:14.462307 master-0 kubenswrapper[32968]: I0309 16:46:14.462261 32968 generic.go:334] "Generic (PLEG): container finished" podID="6c4dfdcc-e182-4831-98e4-1eedb069bcf6" containerID="0890855b3b5026503838ed97808495935321e600acd88d8055621af6b2d87521" exitCode=0 Mar 09 16:46:14.464271 master-0 kubenswrapper[32968]: I0309 16:46:14.464228 32968 generic.go:334] "Generic (PLEG): container finished" podID="696fcca2-df1a-491d-956d-1cfda1ee5e70" 
containerID="f0361f83355d67a2e316e3ff34c657a94b865183e5a166fa44ab20e7b17b6c77" exitCode=0 Mar 09 16:46:14.469822 master-0 kubenswrapper[32968]: I0309 16:46:14.469779 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-pfbvg_3ec3050d-8e6f-466a-995a-f78270408a85/machine-approver-controller/0.log" Mar 09 16:46:14.470502 master-0 kubenswrapper[32968]: I0309 16:46:14.470465 32968 generic.go:334] "Generic (PLEG): container finished" podID="3ec3050d-8e6f-466a-995a-f78270408a85" containerID="2045c91c077228b5fc52cbacb88317be3538b9cb4ff34112c6659345b8d1fd77" exitCode=255 Mar 09 16:46:14.475847 master-0 kubenswrapper[32968]: I0309 16:46:14.475804 32968 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="51cc97980a013ef784c30d027db741202e1e61692ca828907c9b9adb40652a56" exitCode=0 Mar 09 16:46:14.475847 master-0 kubenswrapper[32968]: I0309 16:46:14.475831 32968 generic.go:334] "Generic (PLEG): container finished" podID="457f42a7-f14c-4d61-a87a-bc1ed422feed" containerID="cd074429ed45f5a8693a7e2dec95a69a0356de57104bf51c86da0531be3d00f3" exitCode=0 Mar 09 16:46:14.483272 master-0 kubenswrapper[32968]: I0309 16:46:14.483253 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-gglsc_dc732d23-37bc-41c2-9f9b-333ba517c1f8/cluster-node-tuning-operator/0.log" Mar 09 16:46:14.483463 master-0 kubenswrapper[32968]: I0309 16:46:14.483442 32968 generic.go:334] "Generic (PLEG): container finished" podID="dc732d23-37bc-41c2-9f9b-333ba517c1f8" containerID="25a7ab145b0763001053c074ce2286add5df023f3e9455ff678697bf2aec9346" exitCode=1 Mar 09 16:46:14.615823 master-0 kubenswrapper[32968]: I0309 16:46:14.615666 32968 manager.go:324] Recovery completed Mar 09 16:46:14.728916 master-0 kubenswrapper[32968]: I0309 16:46:14.728842 32968 cpu_manager.go:225] "Starting CPU manager" 
policy="none" Mar 09 16:46:14.728916 master-0 kubenswrapper[32968]: I0309 16:46:14.728886 32968 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 09 16:46:14.728916 master-0 kubenswrapper[32968]: I0309 16:46:14.728929 32968 state_mem.go:36] "Initialized new in-memory state store" Mar 09 16:46:14.729241 master-0 kubenswrapper[32968]: I0309 16:46:14.729155 32968 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 09 16:46:14.729241 master-0 kubenswrapper[32968]: I0309 16:46:14.729190 32968 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 09 16:46:14.729241 master-0 kubenswrapper[32968]: I0309 16:46:14.729216 32968 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 09 16:46:14.729241 master-0 kubenswrapper[32968]: I0309 16:46:14.729222 32968 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 09 16:46:14.729241 master-0 kubenswrapper[32968]: I0309 16:46:14.729229 32968 policy_none.go:49] "None policy: Start" Mar 09 16:46:14.734410 master-0 kubenswrapper[32968]: I0309 16:46:14.734354 32968 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 09 16:46:14.734410 master-0 kubenswrapper[32968]: I0309 16:46:14.734436 32968 state_mem.go:35] "Initializing new in-memory state store" Mar 09 16:46:14.736213 master-0 kubenswrapper[32968]: I0309 16:46:14.734783 32968 state_mem.go:75] "Updated machine memory state" Mar 09 16:46:14.736213 master-0 kubenswrapper[32968]: I0309 16:46:14.734794 32968 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 09 16:46:14.750292 master-0 kubenswrapper[32968]: I0309 16:46:14.750245 32968 manager.go:334] "Starting Device Plugin manager" Mar 09 16:46:14.750292 master-0 kubenswrapper[32968]: I0309 16:46:14.750344 32968 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 09 16:46:14.750292 master-0 kubenswrapper[32968]: I0309 16:46:14.750370 32968 
server.go:79] "Starting device plugin registration server" Mar 09 16:46:14.751443 master-0 kubenswrapper[32968]: I0309 16:46:14.751131 32968 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 09 16:46:14.751443 master-0 kubenswrapper[32968]: I0309 16:46:14.751145 32968 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 09 16:46:14.752262 master-0 kubenswrapper[32968]: I0309 16:46:14.751607 32968 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 09 16:46:14.752262 master-0 kubenswrapper[32968]: I0309 16:46:14.751882 32968 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 09 16:46:14.752262 master-0 kubenswrapper[32968]: I0309 16:46:14.751890 32968 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 09 16:46:14.785216 master-0 kubenswrapper[32968]: I0309 16:46:14.784009 32968 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 09 16:46:14.786643 master-0 kubenswrapper[32968]: I0309 16:46:14.786569 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"57f6bbbfcfb537c0879739b1547de923304fd0f8bd8f06701d29220990585d09"} Mar 09 16:46:14.786643 master-0 kubenswrapper[32968]: I0309 16:46:14.786644 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"858f200c4bed360a1ab5f669d9546aeb752644174af8db489164dd109cc84482"} Mar 09 16:46:14.786777 master-0 kubenswrapper[32968]: I0309 16:46:14.786660 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"3e490c33eb237e7928c7acb6c95a66dd05db37a72075c92f51066f56e730d5ab"} Mar 09 16:46:14.786777 master-0 kubenswrapper[32968]: I0309 16:46:14.786676 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"20d2cbfb13bb229d355b65787791abb03a6d8bc96edc2db80ab37b362f8bfafc"} Mar 09 16:46:14.786777 master-0 kubenswrapper[32968]: I0309 16:46:14.786730 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d52cc0028ff96f98ebd770e2dc5097b98be4fb121e8da758bffb026deac3d78" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786861 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786875 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786886 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786898 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786907 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786917 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerDied","Data":"00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786929 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"48512e02022680c9d90092634f0fc146","Type":"ContainerStarted","Data":"fb5e1e54ee68deb209059559d780923c2be6947b2af201282f1863c7921a006a"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.786975 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6aad7366b5d928e298e63637659b53b629387abc1091e57f92d82a0af1b251a" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787002 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a17ca23871f5fe009b94536b67c14ff0f31a8f935bd942c1dc6b58650ad3cee" Mar 09 16:46:14.788668 master-0 
kubenswrapper[32968]: I0309 16:46:14.787033 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b142f4b016040c51452f30737a55d5afae72a9c5e2b5161cafa663238823b5" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787045 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"2d59ac76dc4be81acf3ade62baf431dad3208a3f0083ed9e5b09fbc150f0a9be"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787055 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787066 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787083 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d0837c89dd7d5c29cb3a16a4172f82ba252bd96283dd17c4859c983ffbc4a953"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787096 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"a81141219f32b726e278b6a94c2bf45a46404948e70612df477a68ae817250cb"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787107 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"9b13491263a5d4609f4ed6efa05d90c0afd38b93af0c6748cf255f4f0ae9a67f"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787117 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"add3696dadb79923d056772ab2d07a81596271dc33777dc0c6ae81fec3a9d5b4"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787129 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6eb340f9829999c7cc79c3d03f217ced767a38d4b0f77e9249276c39cb95fddd"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787141 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"9fcbc01d5f4782d9a43018a868b466e4526448de43e3cfccab2380f32946c687"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787157 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"3c0f8c5cb67d0f971149cd26dcfae78f198a52498c78f29a7fa53e12c2f891cd"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787170 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"626cde970cdf775bf13812ca4ee6f26bea7e402f4efa0e9b555e7bcc797f2635"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787181 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"36e19aa1b6ea9a33b0bd3d90bdae764e4eaffaf7d35f024f5dc33fac765da34c"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787195 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9901d0aaf4b1546909e7fc4c6fcee79bdbe51cd6dd0be1d8dfa8048b9232cb38" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787207 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba18e4cdbe1d3aa5ea0706e01029f58a950047a886c6fb433cb9a5f4e3e02f15" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787231 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf7607bf63c826880c277db5efe1d7b1c54664d8a874cf3cbfd77d87cef3162" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787333 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a571c4eb66ea627ef0709faafeeb737ffc1c33c5646cf333d981378d17a38c39" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787462 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24058654b06de5ea63d96463dcab2ce05518406a3d8c8aadd1a0e496b5a2c7ea" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787499 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3299202c28b8acf777efcf9fdf25fde3d2b0c3f7effed599dce85a012e3a3b40" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787629 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780ab87267d09a817c8af70d196d52705930bc50d893178b79a2f3daaac2986b" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787650 32968 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cb786c3ebfc5b302bbf77e532b601727b3659c5edd9e40f135a583f9877e73b6" Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787668 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"55b9dd03a97a7153346e305d1d756d1e7bf45a58d0547c62d3e8a40594f9dbaa"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787716 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"3c38b2115cd52d1efef54c2999128dc674a18b9803bfdcdba9d9e455d6aa049a"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787727 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"4cd8903e8e22ba82f42ce990c7d672d208e9b2502ddb3553b9e1798f91e13ece"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787738 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787749 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787799 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5"} Mar 09 16:46:14.788668 master-0 kubenswrapper[32968]: I0309 16:46:14.787813 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"3a18cac8a90d6913a6a0391d805cddc9","Type":"ContainerStarted","Data":"b43b8b247bcf7dd91e3dade29e3c0373e4989b5f279bccec521a6e0e7ca4f4e0"} Mar 09 16:46:14.851351 master-0 kubenswrapper[32968]: I0309 16:46:14.851272 32968 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:46:14.855531 master-0 kubenswrapper[32968]: I0309 16:46:14.855413 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 09 16:46:14.855643 master-0 kubenswrapper[32968]: I0309 16:46:14.855560 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:46:14.855643 master-0 kubenswrapper[32968]: I0309 16:46:14.855576 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:46:14.855877 master-0 kubenswrapper[32968]: I0309 16:46:14.855776 32968 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:46:14.859543 master-0 kubenswrapper[32968]: E0309 16:46:14.859467 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.859543 master-0 kubenswrapper[32968]: E0309 16:46:14.859488 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" 
pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.860089 master-0 kubenswrapper[32968]: E0309 16:46:14.859561 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.860089 master-0 kubenswrapper[32968]: E0309 16:46:14.859765 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.860251 master-0 kubenswrapper[32968]: E0309 16:46:14.860105 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.860745 master-0 kubenswrapper[32968]: E0309 16:46:14.860705 32968 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 09 16:46:14.876608 master-0 kubenswrapper[32968]: I0309 16:46:14.876441 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.876608 master-0 kubenswrapper[32968]: I0309 16:46:14.876512 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.876965 master-0 kubenswrapper[32968]: I0309 16:46:14.876640 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.876965 master-0 kubenswrapper[32968]: I0309 16:46:14.876700 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.876965 master-0 kubenswrapper[32968]: I0309 16:46:14.876792 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.876965 master-0 kubenswrapper[32968]: I0309 16:46:14.876851 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.876965 master-0 kubenswrapper[32968]: I0309 16:46:14.876876 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.876965 master-0 kubenswrapper[32968]: I0309 
16:46:14.876918 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:14.877229 master-0 kubenswrapper[32968]: I0309 16:46:14.876984 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.877229 master-0 kubenswrapper[32968]: I0309 16:46:14.877031 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.877229 master-0 kubenswrapper[32968]: I0309 16:46:14.877069 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.877229 master-0 kubenswrapper[32968]: I0309 16:46:14.877097 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") 
" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.877229 master-0 kubenswrapper[32968]: I0309 16:46:14.877159 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.877229 master-0 kubenswrapper[32968]: I0309 16:46:14.877205 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:14.877565 master-0 kubenswrapper[32968]: I0309 16:46:14.877236 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.877565 master-0 kubenswrapper[32968]: I0309 16:46:14.877265 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.877565 master-0 kubenswrapper[32968]: I0309 16:46:14.877292 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.877565 master-0 kubenswrapper[32968]: I0309 16:46:14.877319 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.877565 master-0 kubenswrapper[32968]: I0309 16:46:14.877355 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.877565 master-0 kubenswrapper[32968]: I0309 16:46:14.877384 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.978679 master-0 kubenswrapper[32968]: I0309 16:46:14.978608 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.978679 master-0 kubenswrapper[32968]: I0309 16:46:14.978678 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.978847 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.978903 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.978953 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.978954 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.978985 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.978999 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979002 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979018 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979047 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979059 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: 
\"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979121 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979151 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979172 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979191 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979210 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979256 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979285 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979286 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979320 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979344 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979341 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979379 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979385 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979408 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979445 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.979410 master-0 kubenswrapper[32968]: I0309 16:46:14.979339 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979472 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979506 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979545 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979577 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979587 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979600 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979608 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"48512e02022680c9d90092634f0fc146\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979639 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979660 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " 
pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979695 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979722 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:14.980344 master-0 kubenswrapper[32968]: I0309 16:46:14.979727 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:15.017051 master-0 kubenswrapper[32968]: I0309 16:46:15.016947 32968 apiserver.go:52] "Watching apiserver" Mar 09 16:46:15.039365 master-0 kubenswrapper[32968]: I0309 16:46:15.039297 32968 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 09 16:46:15.042637 master-0 kubenswrapper[32968]: I0309 16:46:15.042544 32968 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9","openshift-multus/network-metrics-daemon-n7slb","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv","openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7","openshift-cluster-node-tuning-operator/tuned-fllqb","openshift-etcd/installer-2-master-0","openshift-ingress-canary/ingress-canary-nxtms","openshift-ingress/router-default-79f8cd6fdd-rvnwf","openshift-kube-controller-manager/installer-4-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/installer-6-master-0","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc","openshift-etcd/installer-1-master-0","openshift-insights/insights-operator-8f89dfddd-5fjz8","openshift-kube-apiserver/installer-2-master-0","openshift-kube-controller-manager/installer-1-master-0","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq","openshift-machine-config-operator/machine-config-server-7d5bx","kube-system/bootstrap-kube-scheduler-master-0","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw","openshift-marketplace/redhat-marketplace-zcvrg","openshift-monitoring/telemeter-client-d4f6dc665-658vm","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb","openshift-ovn-kubernetes/ovnkube-node-vwgwh","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7","openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9","openshift-controller-manager/controller-manager-5c5964c98f-tm4pb","openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4","openshift-monitoring/prometheus-operator-adm
ission-webhook-8464df8497-kdqvv","openshift-multus/multus-admission-controller-7769569c45-jcsfw","openshift-network-operator/network-operator-7c649bf6d4-r82z7","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml","openshift-dns/node-resolver-kqtzc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf","openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb","openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6","openshift-kube-controller-manager/installer-3-master-0","openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv","openshift-monitoring/metrics-server-7c4558858-9rclt","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd","openshift-kube-apiserver/installer-4-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk","openshift-dns-operator/dns-operator-589895fbb7-6sknh","openshift-kube-apiserver/kube-apiserver-master-0","openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n","openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v","openshift-multus/multus-gfqq8","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc","openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v","openshift-machine-config-operator/machine-config-daemon-94s4v","openshift-monitoring/node-exporter-qjk4k","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x","openshift-kube-cont
roller-manager/kube-controller-manager-master-0","openshift-network-node-identity/network-node-identity-nqwd2","openshift-dns/dns-default-sj6x9","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp","openshift-marketplace/certified-operators-8gkw8","openshift-network-diagnostics/network-check-target-ncskk","openshift-network-operator/iptables-alerter-g4tdb","openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb","openshift-apiserver/apiserver-67495f79c-bcblv","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m","openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt","openshift-etcd/etcd-master-0","openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh","openshift-marketplace/redhat-operators-49bwx","openshift-service-ca/service-ca-84bfdbbb7f-6r6g2","assisted-installer/assisted-installer-controller-rdwtz","openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp","openshift-marketplace/community-operators-zrqjw","openshift-multus/multus-additional-cni-plugins-jkhls","openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr","openshift-ingress-operator/ingress-operator-677db989d6-xtmhw","openshift-kube-apiserver/installer-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"] Mar 09 16:46:15.043099 master-0 kubenswrapper[32968]: I0309 16:46:15.043033 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-rdwtz" Mar 09 16:46:15.047486 master-0 kubenswrapper[32968]: I0309 16:46:15.047122 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.047486 master-0 kubenswrapper[32968]: I0309 16:46:15.047294 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 09 16:46:15.052037 master-0 kubenswrapper[32968]: I0309 16:46:15.047557 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 09 16:46:15.052037 master-0 kubenswrapper[32968]: I0309 16:46:15.050631 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 09 16:46:15.052037 master-0 kubenswrapper[32968]: I0309 16:46:15.050960 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 09 16:46:15.055382 master-0 kubenswrapper[32968]: I0309 16:46:15.055343 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 09 16:46:15.055873 master-0 kubenswrapper[32968]: I0309 16:46:15.055848 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 09 16:46:15.056156 master-0 kubenswrapper[32968]: I0309 16:46:15.056138 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 09 16:46:15.056767 master-0 kubenswrapper[32968]: I0309 16:46:15.056670 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 09 16:46:15.057221 master-0 kubenswrapper[32968]: I0309 16:46:15.057158 32968 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 09 16:46:15.057309 master-0 kubenswrapper[32968]: I0309 16:46:15.057285 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 09 16:46:15.057457 master-0 kubenswrapper[32968]: I0309 16:46:15.057412 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 09 16:46:15.057616 master-0 kubenswrapper[32968]: I0309 16:46:15.057581 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 09 16:46:15.057940 master-0 kubenswrapper[32968]: I0309 16:46:15.057886 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 09 16:46:15.058238 master-0 kubenswrapper[32968]: I0309 16:46:15.058083 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.058557 master-0 kubenswrapper[32968]: I0309 16:46:15.058256 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.058557 master-0 kubenswrapper[32968]: I0309 16:46:15.058351 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 09 16:46:15.058557 master-0 kubenswrapper[32968]: I0309 16:46:15.058550 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 09 16:46:15.058819 master-0 kubenswrapper[32968]: I0309 16:46:15.058739 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" 
Mar 09 16:46:15.058999 master-0 kubenswrapper[32968]: I0309 16:46:15.058901 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059047 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059173 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059366 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059459 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059510 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059551 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059635 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059867 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.059955 32968 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.060218 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 09 16:46:15.060523 master-0 kubenswrapper[32968]: I0309 16:46:15.060296 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.061157 master-0 kubenswrapper[32968]: I0309 16:46:15.060930 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 09 16:46:15.061157 master-0 kubenswrapper[32968]: I0309 16:46:15.061029 32968 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 09 16:46:15.061157 master-0 kubenswrapper[32968]: I0309 16:46:15.061110 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 09 16:46:15.061157 master-0 kubenswrapper[32968]: I0309 16:46:15.059043 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 09 16:46:15.061631 master-0 kubenswrapper[32968]: I0309 16:46:15.061579 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 09 16:46:15.062485 master-0 kubenswrapper[32968]: I0309 16:46:15.062271 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 09 16:46:15.062954 master-0 kubenswrapper[32968]: I0309 16:46:15.062924 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 09 16:46:15.063406 master-0 kubenswrapper[32968]: I0309 16:46:15.063367 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.064819 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.065139 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.065601 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.065761 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.066202 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.066213 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 09 16:46:15.066252 master-0 kubenswrapper[32968]: I0309 16:46:15.066272 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 09 16:46:15.069091 master-0 kubenswrapper[32968]: I0309 16:46:15.068828 32968 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 09 16:46:15.069091 master-0 kubenswrapper[32968]: I0309 16:46:15.068982 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 09 16:46:15.069091 master-0 kubenswrapper[32968]: I0309 16:46:15.069040 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 09 16:46:15.069772 master-0 kubenswrapper[32968]: I0309 16:46:15.069741 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 09 16:46:15.069908 master-0 kubenswrapper[32968]: I0309 16:46:15.069882 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 09 16:46:15.070009 master-0 kubenswrapper[32968]: I0309 16:46:15.069954 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 09 16:46:15.070109 master-0 kubenswrapper[32968]: I0309 16:46:15.070078 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 09 16:46:15.070157 master-0 kubenswrapper[32968]: I0309 16:46:15.070128 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 09 16:46:15.070157 master-0 kubenswrapper[32968]: I0309 16:46:15.070136 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 09 16:46:15.070325 master-0 kubenswrapper[32968]: I0309 16:46:15.070292 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.070325 master-0 
kubenswrapper[32968]: I0309 16:46:15.070320 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.070529 master-0 kubenswrapper[32968]: I0309 16:46:15.070499 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.070969 master-0 kubenswrapper[32968]: I0309 16:46:15.070901 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 09 16:46:15.070969 master-0 kubenswrapper[32968]: I0309 16:46:15.070937 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 09 16:46:15.071042 master-0 kubenswrapper[32968]: I0309 16:46:15.071031 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 09 16:46:15.071143 master-0 kubenswrapper[32968]: I0309 16:46:15.070076 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 09 16:46:15.071213 master-0 kubenswrapper[32968]: I0309 16:46:15.071180 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 09 16:46:15.071344 master-0 kubenswrapper[32968]: I0309 16:46:15.071055 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 09 16:46:15.071732 master-0 kubenswrapper[32968]: I0309 16:46:15.071700 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 09 16:46:15.072597 master-0 kubenswrapper[32968]: I0309 16:46:15.072548 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Mar 09 16:46:15.072984 master-0 kubenswrapper[32968]: I0309 16:46:15.072593 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 09 16:46:15.072984 master-0 kubenswrapper[32968]: I0309 16:46:15.072711 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 09 16:46:15.072984 master-0 kubenswrapper[32968]: I0309 16:46:15.072939 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 09 16:46:15.073163 master-0 kubenswrapper[32968]: I0309 16:46:15.073003 32968 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 09 16:46:15.073163 master-0 kubenswrapper[32968]: I0309 16:46:15.073071 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 09 16:46:15.073382 master-0 kubenswrapper[32968]: I0309 16:46:15.073254 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 09 16:46:15.073780 master-0 kubenswrapper[32968]: I0309 16:46:15.073725 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 09 16:46:15.074094 master-0 kubenswrapper[32968]: I0309 16:46:15.073872 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 09 16:46:15.074168 master-0 kubenswrapper[32968]: I0309 16:46:15.073901 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 09 16:46:15.074245 master-0 kubenswrapper[32968]: I0309 16:46:15.073952 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 09 16:46:15.075128 master-0 kubenswrapper[32968]: I0309 16:46:15.075089 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 09 16:46:15.075287 master-0 kubenswrapper[32968]: I0309 16:46:15.075247 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.079949 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080052 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080089 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080112 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080139 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080158 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080181 32968 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080201 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080237 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080342 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080380 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080417 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080540 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080554 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080600 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080631 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: 
\"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080655 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080685 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080721 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080743 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080776 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080812 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080842 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsqs\" (UniqueName: \"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080874 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080901 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: 
\"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080932 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.080970 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081003 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081035 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081067 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081089 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081520 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-config\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081635 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/004d1e93-2345-4e62-902c-33f9dbb0f397-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081665 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081884 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2e38be5-1d33-4171-b27f-78a335f1590b-serving-cert\") pod 
\"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.081916 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-daemon-config\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082060 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082258 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/be86c85d-59b1-4279-8253-a998ca16cd4d-srv-cert\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082259 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082300 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082339 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082345 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082366 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082389 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082453 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082506 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082509 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082549 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082577 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5trxh\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " 
pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082603 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082611 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082631 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082660 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082696 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082725 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082762 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082804 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082832 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082863 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082888 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082913 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082935 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.082987 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: 
\"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.083013 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.083038 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.084030 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.084510 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.084652 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-client\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.084986 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-metrics-tls\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085169 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085342 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc732d23-37bc-41c2-9f9b-333ba517c1f8-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085401 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-config\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085487 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085542 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085570 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085587 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085596 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zwh\" (UniqueName: \"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085649 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085170 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa7f88a3-9845-49a3-a108-d524df592961-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085683 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085706 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085731 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085758 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085781 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085805 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085825 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " 
pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085853 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085873 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod \"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085892 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085933 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-serving-cert\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085952 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085973 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085983 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/457f42a7-f14c-4d61-a87a-bc1ed422feed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086039 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e765395-7c6b-4cba-9a5a-37ba888722bb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.085988 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 
master-0 kubenswrapper[32968]: I0309 16:46:15.086121 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086170 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086202 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086230 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a612208-f777-486f-9dde-048b2d898c7f-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086290 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086439 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-serving-cert\") pod 
\"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086207 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086505 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086578 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-config\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086652 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cni-binary-copy\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086680 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086727 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/df2ec8b2-02d7-40c4-ac20-32615d689697-cni-binary-copy\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086718 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086785 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086848 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086866 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086869 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/1e97466a-7c33-4efb-a961-14024d913a21-operand-assets\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086884 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086910 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a4491c-12cc-4531-ad3e-246e93ed7842-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.086924 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d15da434-241d-4a93-9ce3-f943d43bf2ce-srv-cert\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087230 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/004d1e93-2345-4e62-902c-33f9dbb0f397-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087235 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p2nd\" (UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087319 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087355 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087381 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087384 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087402 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087404 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/457f42a7-f14c-4d61-a87a-bc1ed422feed-serving-cert\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087464 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087409 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087552 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087588 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087613 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087635 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087662 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID: \"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087696 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087725 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:46:15.088656 master-0 kubenswrapper[32968]: I0309 16:46:15.087733 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/72739f4d-da25-493b-91ef-d2b64e9297dd-metrics-tls\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.087753 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.087831 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-config\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.087943 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088012 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088048 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088056 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-serving-cert\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088069 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088096 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088189 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088249 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088315 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088326 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a4491c-12cc-4531-ad3e-246e93ed7842-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088350 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-497s5\" (UniqueName: \"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088495 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088533 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088566 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088563 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088695 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-config\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088722 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5565c060-5952-4e85-8873-18bb80663924-metrics-tls\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088742 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a612208-f777-486f-9dde-048b2d898c7f-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088816 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088896 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088914 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e97466a-7c33-4efb-a961-14024d913a21-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088992 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.088994 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089031 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-etcd-ca\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089038 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089112 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089114 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089222 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089226 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089266 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089289 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089339 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6912539-9b06-4e2c-b6a8-155df31147f2-serving-cert\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089416 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6912539-9b06-4e2c-b6a8-155df31147f2-config\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089534 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.089535 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.090850 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc732d23-37bc-41c2-9f9b-333ba517c1f8-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.090873 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.090990 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.091300 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.091361 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa7f88a3-9845-49a3-a108-d524df592961-images\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.094391 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.097621 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ef122f26-bfae-44d2-a70a-8507b3b47332-metrics-certs\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.099076 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.099452 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.099742 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-config\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.100210 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f965b971-7e9a-4513-8450-b2b527609bd6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.100515 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e4895f22-8fcd-4ace-96d8-bc2e18a67891-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.101484 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.102404 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.103093 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e4895f22-8fcd-4ace-96d8-bc2e18a67891-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.104734 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.106501 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.106688 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.106719 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.106858 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.107551 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.110801 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e38be5-1d33-4171-b27f-78a335f1590b-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:46:15.111011 master-0 kubenswrapper[32968]: I0309 16:46:15.110847 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 09 16:46:15.114942 master-0 kubenswrapper[32968]: I0309 16:46:15.112168 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 09 16:46:15.114942 master-0 kubenswrapper[32968]: I0309 16:46:15.113020 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-trusted-ca\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:46:15.114942 master-0 kubenswrapper[32968]: I0309 16:46:15.113937 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 09 16:46:15.116267 master-0 kubenswrapper[32968]: I0309 16:46:15.116169 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e765395-7c6b-4cba-9a5a-37ba888722bb-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:46:15.123735 master-0 kubenswrapper[32968]: I0309 16:46:15.123679 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9030c9-7f5f-4e54-ae93-140469e3558b-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4"
Mar 09 16:46:15.125832 master-0 kubenswrapper[32968]: I0309 16:46:15.125588 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 09 16:46:15.146724 master-0 kubenswrapper[32968]: I0309 16:46:15.146570 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 09 16:46:15.150970 master-0 kubenswrapper[32968]: I0309 16:46:15.150918 32968 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 09 16:46:15.166477 master-0 kubenswrapper[32968]: I0309 16:46:15.166415 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 09 16:46:15.185374 master-0 kubenswrapper[32968]: I0309 16:46:15.185304 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 09 16:46:15.190841 master-0 kubenswrapper[32968]: I0309 16:46:15.190786 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkxg\" (UniqueName: \"kubernetes.io/projected/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-kube-api-access-4gkxg\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:46:15.191117 master-0 kubenswrapper[32968]: I0309 16:46:15.190929 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"
Mar 09 16:46:15.191117 master-0 kubenswrapper[32968]: I0309 16:46:15.190956 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:46:15.191117 master-0 kubenswrapper[32968]: I0309 16:46:15.190976 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.191117 master-0 kubenswrapper[32968]: I0309 16:46:15.191002 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-client\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.191117 master-0 kubenswrapper[32968]: I0309 16:46:15.191021 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:46:15.191386 master-0 kubenswrapper[32968]: I0309 16:46:15.191338 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-trusted-ca-bundle\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:46:15.191386 master-0 kubenswrapper[32968]: I0309 16:46:15.191375 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-system-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.191494 master-0 kubenswrapper[32968]: I0309 16:46:15.191392 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhct\" (UniqueName: \"kubernetes.io/projected/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-api-access-jrhct\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:46:15.191494 master-0 kubenswrapper[32968]: I0309 16:46:15.191362 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"
Mar 09 16:46:15.191560 master-0 kubenswrapper[32968]: I0309 16:46:15.191503 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:46:15.191560 master-0 kubenswrapper[32968]: I0309 16:46:15.191528 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0"
Mar 09 16:46:15.191753 master-0 kubenswrapper[32968]: I0309 16:46:15.191668 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:15.191826 master-0 kubenswrapper[32968]: I0309 16:46:15.191779 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvh62\" (UniqueName: \"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:46:15.191826 master-0 kubenswrapper[32968]: I0309 16:46:15.191810 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlnq\" (UniqueName: \"kubernetes.io/projected/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-kube-api-access-dxlnq\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar
09 16:46:15.191894 master-0 kubenswrapper[32968]: I0309 16:46:15.191837 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.191935 master-0 kubenswrapper[32968]: I0309 16:46:15.191869 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.191970 master-0 kubenswrapper[32968]: I0309 16:46:15.191928 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-multus-certs\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.191970 master-0 kubenswrapper[32968]: I0309 16:46:15.191951 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:15.191970 master-0 kubenswrapper[32968]: I0309 16:46:15.191967 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.192135 master-0 kubenswrapper[32968]: I0309 16:46:15.191992 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-hosts-file\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:46:15.192135 master-0 kubenswrapper[32968]: I0309 16:46:15.192048 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:15.192135 master-0 kubenswrapper[32968]: I0309 16:46:15.192076 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:15.192302 master-0 kubenswrapper[32968]: I0309 16:46:15.192274 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-client\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.192455 master-0 kubenswrapper[32968]: I0309 16:46:15.192434 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-tmp\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.192505 master-0 kubenswrapper[32968]: I0309 16:46:15.192466 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-wtmp\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.192545 master-0 kubenswrapper[32968]: I0309 16:46:15.192503 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:15.192633 master-0 kubenswrapper[32968]: I0309 16:46:15.192556 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2bk\" (UniqueName: \"kubernetes.io/projected/18f0164f-0875-4668-b155-df69e05e8ae0-kube-api-access-pq2bk\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms" Mar 09 16:46:15.192633 master-0 kubenswrapper[32968]: I0309 16:46:15.192577 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.192633 master-0 
kubenswrapper[32968]: I0309 16:46:15.192596 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.192633 master-0 kubenswrapper[32968]: I0309 16:46:15.192611 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-tmp\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.192810 master-0 kubenswrapper[32968]: I0309 16:46:15.192674 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:15.192810 master-0 kubenswrapper[32968]: I0309 16:46:15.192711 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:15.192810 master-0 kubenswrapper[32968]: I0309 16:46:15.192726 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-cni-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.192810 master-0 kubenswrapper[32968]: I0309 
16:46:15.192734 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.192970 master-0 kubenswrapper[32968]: I0309 16:46:15.192914 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.192970 master-0 kubenswrapper[32968]: I0309 16:46:15.192945 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvfgw\" (UniqueName: \"kubernetes.io/projected/e5c4ccb0-f795-44bd-9bb4-baf84564c239-kube-api-access-cvfgw\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:15.193034 master-0 kubenswrapper[32968]: I0309 16:46:15.192968 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.193069 master-0 kubenswrapper[32968]: I0309 16:46:15.193054 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: 
\"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.193106 master-0 kubenswrapper[32968]: I0309 16:46:15.193058 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-system-cni-dir\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.193160 master-0 kubenswrapper[32968]: I0309 16:46:15.193078 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:15.193251 master-0 kubenswrapper[32968]: I0309 16:46:15.193220 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-root\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.193593 master-0 kubenswrapper[32968]: I0309 16:46:15.193472 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjf4p\" (UniqueName: \"kubernetes.io/projected/9482fb93-c223-45ee-bde8-7667303270b6-kube-api-access-qjf4p\") pod \"network-check-source-7c67b67d47-d9wjb\" (UID: \"9482fb93-c223-45ee-bde8-7667303270b6\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb" Mar 09 16:46:15.193593 master-0 kubenswrapper[32968]: I0309 16:46:15.193511 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-utilities\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:15.193593 master-0 kubenswrapper[32968]: I0309 16:46:15.193535 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.193593 master-0 kubenswrapper[32968]: I0309 16:46:15.193556 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.193759 master-0 kubenswrapper[32968]: I0309 16:46:15.193658 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:15.193759 master-0 kubenswrapper[32968]: I0309 16:46:15.193682 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-serving-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.193759 master-0 kubenswrapper[32968]: I0309 16:46:15.193682 32968 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-utilities\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:15.193759 master-0 kubenswrapper[32968]: I0309 16:46:15.193701 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcvbf\" (UniqueName: \"kubernetes.io/projected/a6cd9347-eec9-4549-9de4-6033112634ce-kube-api-access-lcvbf\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:46:15.193759 master-0 kubenswrapper[32968]: I0309 16:46:15.193756 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.193906 master-0 kubenswrapper[32968]: I0309 16:46:15.193785 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8796f37c-d1ec-469d-90df-e007bf620e8c-tmpfs\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:15.193906 master-0 kubenswrapper[32968]: I0309 16:46:15.193812 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.193906 master-0 kubenswrapper[32968]: I0309 16:46:15.193832 
32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.193906 master-0 kubenswrapper[32968]: I0309 16:46:15.193852 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.193906 master-0 kubenswrapper[32968]: I0309 16:46:15.193874 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:46:15.193906 master-0 kubenswrapper[32968]: I0309 16:46:15.193890 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8796f37c-d1ec-469d-90df-e007bf620e8c-tmpfs\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:15.194079 master-0 kubenswrapper[32968]: I0309 16:46:15.193946 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovn-node-metrics-cert\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 
16:46:15.194079 master-0 kubenswrapper[32968]: I0309 16:46:15.193955 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-cnibin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.194079 master-0 kubenswrapper[32968]: I0309 16:46:15.194029 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.194079 master-0 kubenswrapper[32968]: I0309 16:46:15.194064 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl697\" (UniqueName: \"kubernetes.io/projected/ea34ff7e-27fa-4c26-acc0-ec551985eb76-kube-api-access-fl697\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:15.194225 master-0 kubenswrapper[32968]: I0309 16:46:15.194076 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-etc-kubernetes\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.194225 master-0 kubenswrapper[32968]: I0309 16:46:15.194075 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5565c060-5952-4e85-8873-18bb80663924-host-etc-kube\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7" Mar 09 16:46:15.194225 master-0 kubenswrapper[32968]: I0309 16:46:15.194088 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:46:15.194225 master-0 kubenswrapper[32968]: I0309 16:46:15.194147 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-utilities\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:15.194225 master-0 kubenswrapper[32968]: I0309 16:46:15.194164 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-image-import-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.194225 master-0 kubenswrapper[32968]: I0309 16:46:15.194186 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grmch\" (UniqueName: \"kubernetes.io/projected/f3033e86-fee0-45dc-ba74-d5448a777400-kube-api-access-grmch\") pod \"migrator-57ccdf9b5-4vd54\" (UID: \"f3033e86-fee0-45dc-ba74-d5448a777400\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194252 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-policies\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194272 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-kubernetes\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194310 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-utilities\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194329 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194365 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl5kt\" (UniqueName: \"kubernetes.io/projected/8c93fb5d-373d-4473-99dd-50e4398bafbf-kube-api-access-nl5kt\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194405 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194454 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:15.194535 master-0 kubenswrapper[32968]: I0309 16:46:15.194537 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:46:15.195037 master-0 kubenswrapper[32968]: I0309 16:46:15.194578 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:15.195037 master-0 kubenswrapper[32968]: I0309 16:46:15.195014 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access\") pod \"installer-4-master-0\" (UID: 
\"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:46:15.195139 master-0 kubenswrapper[32968]: I0309 16:46:15.195104 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-webhook-cert\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:46:15.195182 master-0 kubenswrapper[32968]: I0309 16:46:15.195115 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.195227 master-0 kubenswrapper[32968]: I0309 16:46:15.195198 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:15.195276 master-0 kubenswrapper[32968]: I0309 16:46:15.195239 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:15.195276 master-0 kubenswrapper[32968]: I0309 16:46:15.195267 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:15.195374 master-0 kubenswrapper[32968]: I0309 16:46:15.195291 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-run\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.195374 master-0 kubenswrapper[32968]: I0309 16:46:15.195323 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:15.195374 master-0 kubenswrapper[32968]: I0309 16:46:15.195353 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:46:15.195551 master-0 kubenswrapper[32968]: I0309 16:46:15.195381 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:15.195551 
master-0 kubenswrapper[32968]: I0309 16:46:15.195406 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:15.195551 master-0 kubenswrapper[32968]: I0309 16:46:15.195460 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/baf704e3-daf2-4934-a04e-d31df8df0c4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v" Mar 09 16:46:15.195551 master-0 kubenswrapper[32968]: I0309 16:46:15.195489 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.195708 master-0 kubenswrapper[32968]: I0309 16:46:15.195683 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.195758 master-0 kubenswrapper[32968]: I0309 16:46:15.195716 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: 
\"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:15.195805 master-0 kubenswrapper[32968]: I0309 16:46:15.195747 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.195805 master-0 kubenswrapper[32968]: I0309 16:46:15.195781 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-bin\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.195805 master-0 kubenswrapper[32968]: I0309 16:46:15.195793 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-sys\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.195917 master-0 kubenswrapper[32968]: I0309 16:46:15.195810 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:15.195917 master-0 kubenswrapper[32968]: I0309 16:46:15.195836 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: 
\"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.195917 master-0 kubenswrapper[32968]: I0309 16:46:15.195911 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:46:15.196076 master-0 kubenswrapper[32968]: I0309 16:46:15.195939 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6b4992e-50f3-473c-aa83-ed35569ba307-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" Mar 09 16:46:15.196076 master-0 kubenswrapper[32968]: I0309 16:46:15.195947 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-conf-dir\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.196076 master-0 kubenswrapper[32968]: I0309 16:46:15.195968 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:46:15.196076 master-0 kubenswrapper[32968]: I0309 16:46:15.196029 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: 
\"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-tuned\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.196076 master-0 kubenswrapper[32968]: I0309 16:46:15.196052 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkfn\" (UniqueName: \"kubernetes.io/projected/79a8ea87-c29a-4248-927f-6f1acfc494d7-kube-api-access-rvkfn\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:15.196076 master-0 kubenswrapper[32968]: I0309 16:46:15.196074 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.196376 master-0 kubenswrapper[32968]: I0309 16:46:15.196139 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-tuned\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.196376 master-0 kubenswrapper[32968]: I0309 16:46:15.196163 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-multus-socket-dir-parent\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.196376 master-0 kubenswrapper[32968]: I0309 16:46:15.196351 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:46:15.196567 master-0 kubenswrapper[32968]: I0309 16:46:15.196438 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rjs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-kube-api-access-p8rjs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.196567 master-0 kubenswrapper[32968]: I0309 16:46:15.196467 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.196567 master-0 kubenswrapper[32968]: I0309 16:46:15.196494 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:15.196567 master-0 kubenswrapper[32968]: I0309 16:46:15.196520 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-textfile\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 
16:46:15.196567 master-0 kubenswrapper[32968]: I0309 16:46:15.196545 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:15.196771 master-0 kubenswrapper[32968]: I0309 16:46:15.196628 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-textfile\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.196771 master-0 kubenswrapper[32968]: I0309 16:46:15.196665 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgl27\" (UniqueName: \"kubernetes.io/projected/1da6f189-535a-4bf1-bbdb-758327651ae2-kube-api-access-xgl27\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:46:15.196771 master-0 kubenswrapper[32968]: I0309 16:46:15.196711 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.196901 master-0 kubenswrapper[32968]: I0309 16:46:15.196776 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-key\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: 
\"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:46:15.196901 master-0 kubenswrapper[32968]: I0309 16:46:15.196807 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:15.196901 master-0 kubenswrapper[32968]: I0309 16:46:15.196827 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-host\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.196901 master-0 kubenswrapper[32968]: I0309 16:46:15.196879 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2qw\" (UniqueName: \"kubernetes.io/projected/8796f37c-d1ec-469d-90df-e007bf620e8c-kube-api-access-6n2qw\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:15.197028 master-0 kubenswrapper[32968]: I0309 16:46:15.196929 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:15.197028 master-0 kubenswrapper[32968]: I0309 16:46:15.196989 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.197028 master-0 kubenswrapper[32968]: I0309 16:46:15.197007 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ebbec674-ac49-422a-9548-5c29b15ad44d-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:15.197167 master-0 kubenswrapper[32968]: I0309 16:46:15.197051 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:15.197167 master-0 kubenswrapper[32968]: I0309 16:46:15.197083 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/ebbec674-ac49-422a-9548-5c29b15ad44d-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:15.197167 master-0 kubenswrapper[32968]: I0309 16:46:15.197100 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-catalog-content\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:46:15.197167 
master-0 kubenswrapper[32968]: I0309 16:46:15.197140 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:15.197325 master-0 kubenswrapper[32968]: I0309 16:46:15.197174 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms" Mar 09 16:46:15.197325 master-0 kubenswrapper[32968]: I0309 16:46:15.197214 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:15.197325 master-0 kubenswrapper[32968]: I0309 16:46:15.197242 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-catalog-content\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:46:15.197325 master-0 kubenswrapper[32968]: I0309 16:46:15.197252 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: 
\"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.197325 master-0 kubenswrapper[32968]: I0309 16:46:15.197311 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-868cs\" (UniqueName: \"kubernetes.io/projected/a320d845-3a5d-4027-a765-f0b2dc07f9de-kube-api-access-868cs\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:46:15.197507 master-0 kubenswrapper[32968]: I0309 16:46:15.197349 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.197507 master-0 kubenswrapper[32968]: I0309 16:46:15.197382 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8be2517a-6f28-4289-a108-6e3345a1e246-snapshots\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:15.197507 master-0 kubenswrapper[32968]: I0309 16:46:15.197409 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-catalog-content\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:15.197507 master-0 kubenswrapper[32968]: I0309 16:46:15.197484 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-serving-ca\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.197507 master-0 kubenswrapper[32968]: I0309 16:46:15.197454 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/8be2517a-6f28-4289-a108-6e3345a1e246-snapshots\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:15.197702 master-0 kubenswrapper[32968]: I0309 16:46:15.197628 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3745c679-2ea9-4382-9270-4d3fbbaaf296-catalog-content\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:15.197702 master-0 kubenswrapper[32968]: I0309 16:46:15.197675 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.197772 master-0 kubenswrapper[32968]: I0309 16:46:15.197712 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-kubelet\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.197772 master-0 kubenswrapper[32968]: I0309 16:46:15.197750 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qsbkx\" (UniqueName: \"kubernetes.io/projected/3ec3050d-8e6f-466a-995a-f78270408a85-kube-api-access-qsbkx\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:46:15.197866 master-0 kubenswrapper[32968]: I0309 16:46:15.197813 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.197866 master-0 kubenswrapper[32968]: I0309 16:46:15.197843 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-conf\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.197872 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-catalog-content\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.197913 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9cq\" (UniqueName: \"kubernetes.io/projected/aec186fc-aead-47fb-a7e1-8c9325897c47-kube-api-access-vj9cq\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " 
pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198016 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198089 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkqg\" (UniqueName: \"kubernetes.io/projected/57036838-9f42-4ea1-a5c9-77f820cc22c9-kube-api-access-czkqg\") pod \"csi-snapshot-controller-7577d6f48-f594m\" (UID: \"57036838-9f42-4ea1-a5c9-77f820cc22c9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198022 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be856881-2ceb-4803-8330-4a27ad8b1937-catalog-content\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198152 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198179 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-containers\") 
pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198202 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198226 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198227 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-hostroot\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198262 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpfl\" (UniqueName: \"kubernetes.io/projected/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-kube-api-access-shpfl\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198307 32968 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.198400 master-0 kubenswrapper[32968]: I0309 16:46:15.198394 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-lib-modules\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198452 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnstc\" (UniqueName: \"kubernetes.io/projected/b9fc9e7d-652c-4063-9cdb-358f58cae29a-kube-api-access-xnstc\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198493 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8972b380-8f87-4b73-8f95-440d34d03884-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198574 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198633 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198669 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198702 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198741 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198774 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198816 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrms4\" (UniqueName: \"kubernetes.io/projected/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-kube-api-access-rrms4\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198819 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-os-release\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198912 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198945 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-utilities\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw"
Mar 09 16:46:15.198967 master-0 kubenswrapper[32968]: I0309 16:46:15.198971 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-var-lib-cni-multus\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.198971 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199015 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199038 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199062 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199096 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec186fc-aead-47fb-a7e1-8c9325897c47-utilities\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199145 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199178 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199202 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199230 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-systemd\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199264 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwnd\" (UniqueName: \"kubernetes.io/projected/8972b380-8f87-4b73-8f95-440d34d03884-kube-api-access-8hwnd\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199268 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-config\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199306 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw4zf\" (UniqueName: \"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:46:15.199336 master-0 kubenswrapper[32968]: I0309 16:46:15.199348 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199384 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199463 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199497 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199527 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kdqvv\" (UID: \"e346cb5b-411d-4014-a8d0-590d8deee8ac\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199563 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199579 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-env-overrides\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199595 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhglf\" (UniqueName: \"kubernetes.io/projected/baf704e3-daf2-4934-a04e-d31df8df0c4a-kube-api-access-nhglf\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199682 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:46:15.199787 master-0 kubenswrapper[32968]: I0309 16:46:15.199733 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqdm\" (UniqueName: \"kubernetes.io/projected/af4aa8d4-09e1-4589-b7bf-885617a11337-kube-api-access-whqdm\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2"
Mar 09 16:46:15.200081 master-0 kubenswrapper[32968]: I0309 16:46:15.199814 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.200081 master-0 kubenswrapper[32968]: I0309 16:46:15.199880 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:46:15.200081 master-0 kubenswrapper[32968]: I0309 16:46:15.199911 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:46:15.200081 master-0 kubenswrapper[32968]: I0309 16:46:15.199964 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:46:15.200241 master-0 kubenswrapper[32968]: I0309 16:46:15.200126 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:46:15.200241 master-0 kubenswrapper[32968]: I0309 16:46:15.200176 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg"
Mar 09 16:46:15.200241 master-0 kubenswrapper[32968]: I0309 16:46:15.200220 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc2t2\" (UniqueName: \"kubernetes.io/projected/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-kube-api-access-kc2t2\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.200241 master-0 kubenswrapper[32968]: I0309 16:46:15.200245 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-cabundle\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2"
Mar 09 16:46:15.200408 master-0 kubenswrapper[32968]: I0309 16:46:15.200270 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:46:15.200408 master-0 kubenswrapper[32968]: I0309 16:46:15.200308 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-catalog-content\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:46:15.200408 master-0 kubenswrapper[32968]: I0309 16:46:15.200332 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:46:15.200408 master-0 kubenswrapper[32968]: I0309 16:46:15.200333 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-ovnkube-identity-cm\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2"
Mar 09 16:46:15.200408 master-0 kubenswrapper[32968]: I0309 16:46:15.200351 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysconfig\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb"
Mar 09 16:46:15.200675 master-0 kubenswrapper[32968]: I0309 16:46:15.200456 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-catalog-content\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:46:15.200675 master-0 kubenswrapper[32968]: I0309 16:46:15.200470 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:46:15.200675 master-0 kubenswrapper[32968]: I0309 16:46:15.200520 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:46:15.200675 master-0 kubenswrapper[32968]: I0309 16:46:15.200598 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-encryption-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.200675 master-0 kubenswrapper[32968]: I0309 16:46:15.200643 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:46:15.200675 master-0 kubenswrapper[32968]: I0309 16:46:15.200669 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dzfq\" (UniqueName: \"kubernetes.io/projected/5587e967-124e-4f2a-b7fb-42cb16bfc337-kube-api-access-4dzfq\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200698 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200740 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-node-pullsecrets\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200773 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200836 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26xps\" (UniqueName: \"kubernetes.io/projected/e91a0e23-c95b-4290-9c0c-29101febfc8f-kube-api-access-26xps\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200918 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200961 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.200993 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:15.201046 master-0 kubenswrapper[32968]: I0309 16:46:15.201044 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ea34ff7e-27fa-4c26-acc0-ec551985eb76-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201084 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201118 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201151 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9fx\" (UniqueName: \"kubernetes.io/projected/8be2517a-6f28-4289-a108-6e3345a1e246-kube-api-access-hh9fx\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201180 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201232 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-service-ca\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201267 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201298 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5587e967-124e-4f2a-b7fb-42cb16bfc337-config-volume\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:46:15.201349 master-0 kubenswrapper[32968]: I0309 16:46:15.201347 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8rh\" (UniqueName: \"kubernetes.io/projected/92bd7735-8e3c-43bb-b543-03e6e6c5142a-kube-api-access-dv8rh\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:46:15.201707 master-0 kubenswrapper[32968]: I0309 16:46:15.201400 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-dir\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:46:15.201707 master-0 kubenswrapper[32968]: I0309 16:46:15.201469 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8hj\" (UniqueName: \"kubernetes.io/projected/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-kube-api-access-wn8hj\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc"
Mar 09 16:46:15.201707 master-0 kubenswrapper[32968]: I0309 16:46:15.201518 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:46:15.201707 master-0 kubenswrapper[32968]: I0309 16:46:15.201572 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:46:15.201707 master-0 kubenswrapper[32968]: I0309 16:46:15.201653 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.201707 master-0 kubenswrapper[32968]: I0309 16:46:15.201685 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201718 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201750 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201783 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201750 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-k8s-cni-cncf-io\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201826 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495rn\" (UniqueName: \"kubernetes.io/projected/357570a4-f69b-4970-9b6f-fe06fc4c2f90-kube-api-access-495rn\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201863 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-serving-cert\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201895 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jvl\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-kube-api-access-h8jvl\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb"
Mar 09 16:46:15.201942 master-0 kubenswrapper[32968]: I0309 16:46:15.201930 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzzg\" (UniqueName: \"kubernetes.io/projected/d6b4992e-50f3-473c-aa83-ed35569ba307-kube-api-access-bhzzg\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.201956 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.201996 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.202028 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.202063 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.202095 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/baf704e3-daf2-4934-a04e-d31df8df0c4a-rootfs\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.202148 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.202182 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 09 16:46:15.202269 master-0 kubenswrapper[32968]: I0309 16:46:15.202026 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d47955b-b85c-4137-9dea-ff0c20d5ab77-ovnkube-script-lib\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:15.202589 master-0 kubenswrapper[32968]: I0309 16:46:15.202353 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit-dir\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.202589 master-0 kubenswrapper[32968]: I0309 16:46:15.202359 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-os-release\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls"
Mar 09 16:46:15.202589 master-0 kubenswrapper[32968]: I0309 16:46:15.202499 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:46:15.202589 master-0 kubenswrapper[32968]: I0309 16:46:15.202548 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.202589 master-0 kubenswrapper[32968]: I0309 16:46:15.202590 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98bk\" (UniqueName: \"kubernetes.io/projected/be856881-2ceb-4803-8330-4a27ad8b1937-kube-api-access-v98bk\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:46:15.202778 master-0 kubenswrapper[32968]: I0309 16:46:15.202624 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:15.202778 master-0 kubenswrapper[32968]: I0309 16:46:15.202628 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/df2ec8b2-02d7-40c4-ac20-32615d689697-host-run-netns\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8"
Mar 09 16:46:15.202778 master-0 kubenswrapper[32968]: I0309 16:46:15.202702 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-var-lib-kubelet\") pod \"tuned-fllqb\" (UID: 
\"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.202778 master-0 kubenswrapper[32968]: I0309 16:46:15.202733 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.202943 master-0 kubenswrapper[32968]: I0309 16:46:15.202780 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:15.202943 master-0 kubenswrapper[32968]: I0309 16:46:15.202829 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgj24\" (UniqueName: \"kubernetes.io/projected/3745c679-2ea9-4382-9270-4d3fbbaaf296-kube-api-access-jgj24\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:15.202943 master-0 kubenswrapper[32968]: I0309 16:46:15.202861 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.202943 master-0 kubenswrapper[32968]: I0309 16:46:15.202892 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-sys\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.203079 master-0 kubenswrapper[32968]: I0309 16:46:15.202974 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.203079 master-0 kubenswrapper[32968]: I0309 16:46:15.203004 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/709aad35-08ca-4ff5-abe5-e1558c8dc83f-iptables-alerter-script\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:15.203079 master-0 kubenswrapper[32968]: I0309 16:46:15.203052 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-encryption-config\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.203172 master-0 kubenswrapper[32968]: I0309 16:46:15.203118 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-modprobe-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.203227 master-0 kubenswrapper[32968]: I0309 16:46:15.203192 32968 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.203295 master-0 kubenswrapper[32968]: I0309 16:46:15.203252 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mr7t\" (UniqueName: \"kubernetes.io/projected/c76178f6-3f0b-4b7d-ad23-724b83e35120-kube-api-access-2mr7t\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.203336 master-0 kubenswrapper[32968]: I0309 16:46:15.203296 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.203373 master-0 kubenswrapper[32968]: I0309 16:46:15.203329 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-trusted-ca-bundle\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.203456 master-0 kubenswrapper[32968]: I0309 16:46:15.203381 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ba020e0-1728-4e56-9618-d0ec3d9126eb-cnibin\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:15.203499 master-0 kubenswrapper[32968]: I0309 
16:46:15.203464 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdb8\" (UniqueName: \"kubernetes.io/projected/34c0b60e-da69-452d-858d-0af77f18946d-kube-api-access-vmdb8\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" Mar 09 16:46:15.203545 master-0 kubenswrapper[32968]: I0309 16:46:15.203504 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-utilities\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:46:15.203585 master-0 kubenswrapper[32968]: I0309 16:46:15.203542 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:46:15.203685 master-0 kubenswrapper[32968]: I0309 16:46:15.203655 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1da6f189-535a-4bf1-bbdb-758327651ae2-utilities\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:46:15.203807 master-0 kubenswrapper[32968]: I0309 16:46:15.203781 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-env-overrides\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " 
pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:46:15.205914 master-0 kubenswrapper[32968]: I0309 16:46:15.205874 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 09 16:46:15.226090 master-0 kubenswrapper[32968]: I0309 16:46:15.226023 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 09 16:46:15.246348 master-0 kubenswrapper[32968]: I0309 16:46:15.246279 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 09 16:46:15.266028 master-0 kubenswrapper[32968]: I0309 16:46:15.265962 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 09 16:46:15.270838 master-0 kubenswrapper[32968]: I0309 16:46:15.270803 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.289147 master-0 kubenswrapper[32968]: I0309 16:46:15.288015 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 09 16:46:15.304584 master-0 kubenswrapper[32968]: I0309 16:46:15.304337 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.304584 master-0 kubenswrapper[32968]: I0309 16:46:15.304437 32968 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.304584 master-0 kubenswrapper[32968]: I0309 16:46:15.304504 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.304584 master-0 kubenswrapper[32968]: I0309 16:46:15.304585 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304616 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-conf\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304687 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304877 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304896 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-slash\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304917 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304959 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304972 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-kubelet\") pod 
\"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.304990 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-etc-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.305003 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.305064 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:46:15.305155 master-0 kubenswrapper[32968]: I0309 16:46:15.305093 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305197 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-lib-modules\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305210 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-conf\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305300 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305373 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305388 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-netd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305392 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-lib-modules\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305401 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-systemd\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305449 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305501 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-systemd\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305616 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305658 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305714 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.305762 master-0 kubenswrapper[32968]: I0309 16:46:15.305778 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysconfig\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.305843 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysconfig\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.305929 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.306493 master-0 
kubenswrapper[32968]: I0309 16:46:15.305958 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-node-pullsecrets\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.306050 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ea34ff7e-27fa-4c26-acc0-ec551985eb76-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.306165 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-dir\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.306224 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.306373 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-systemd\") pod 
\"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.306493 master-0 kubenswrapper[32968]: I0309 16:46:15.306446 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306538 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306584 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-dir\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306634 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/ea34ff7e-27fa-4c26-acc0-ec551985eb76-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306764 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/baf704e3-daf2-4934-a04e-d31df8df0c4a-rootfs\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306768 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-node-pullsecrets\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306798 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306828 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-systemd-units\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306859 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/baf704e3-daf2-4934-a04e-d31df8df0c4a-rootfs\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306879 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306909 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit-dir\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.306933 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"installer-4-master-0\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.307021 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-var-lib-kubelet\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.307063 master-0 kubenswrapper[32968]: I0309 16:46:15.307079 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307005 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit-dir\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307123 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-log-socket\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307127 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-var-lib-kubelet\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307151 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-sys\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307196 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-sys\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307292 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-modprobe-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307511 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-modprobe-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307523 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307626 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-cni-bin\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307840 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307952 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307967 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.307995 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308015 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-hosts-file\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308089 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-wtmp\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308100 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-hosts-file\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308161 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308206 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-wtmp\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308230 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-sysctl-d\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308257 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.308596 master-0 kubenswrapper[32968]: I0309 16:46:15.308284 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-netns\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.309546 master-0 kubenswrapper[32968]: I0309 16:46:15.309245 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.309546 master-0 kubenswrapper[32968]: I0309 16:46:15.309331 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.309546 master-0 kubenswrapper[32968]: I0309 16:46:15.309376 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-root\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.309546 master-0 kubenswrapper[32968]: I0309 16:46:15.309460 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-root\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.309546 master-0 kubenswrapper[32968]: I0309 16:46:15.309500 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.309546 master-0 kubenswrapper[32968]: I0309 16:46:15.309522 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-node-log\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.309904 master-0 kubenswrapper[32968]: I0309 16:46:15.309726 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-kubernetes\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.309974 master-0 kubenswrapper[32968]: I0309 16:46:15.309918 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-etc-kubernetes\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.310043 master-0 kubenswrapper[32968]: I0309 16:46:15.309982 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-run\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.310043 master-0 kubenswrapper[32968]: I0309 16:46:15.310036 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:15.310165 master-0 kubenswrapper[32968]: I0309 16:46:15.310074 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.310165 master-0 kubenswrapper[32968]: I0309 16:46:15.310120 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/709aad35-08ca-4ff5-abe5-e1558c8dc83f-host-slash\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:15.310165 master-0 kubenswrapper[32968]: I0309 16:46:15.310155 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.310346 master-0 kubenswrapper[32968]: I0309 16:46:15.310176 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-sys\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.310346 master-0 kubenswrapper[32968]: I0309 16:46:15.310236 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-var-lib-openvswitch\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.310346 master-0 kubenswrapper[32968]: I0309 16:46:15.310297 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-run\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.310580 master-0 kubenswrapper[32968]: I0309 16:46:15.310402 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:15.310580 master-0 kubenswrapper[32968]: I0309 16:46:15.310487 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc9e7d-652c-4063-9cdb-358f58cae29a-sys\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:15.310580 master-0 kubenswrapper[32968]: I0309 16:46:15.310536 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d47955b-b85c-4137-9dea-ff0c20d5ab77-run-ovn\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:15.310716 master-0 kubenswrapper[32968]: I0309 16:46:15.310616 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-host\") pod \"tuned-fllqb\" (UID: 
\"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.310716 master-0 kubenswrapper[32968]: I0309 16:46:15.310649 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:15.310716 master-0 kubenswrapper[32968]: I0309 16:46:15.310663 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c76178f6-3f0b-4b7d-ad23-724b83e35120-host\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:15.315857 master-0 kubenswrapper[32968]: I0309 16:46:15.315808 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 09 16:46:15.323009 master-0 kubenswrapper[32968]: I0309 16:46:15.322898 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: \"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:15.325766 master-0 kubenswrapper[32968]: I0309 16:46:15.325728 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 09 16:46:15.352193 master-0 kubenswrapper[32968]: I0309 16:46:15.352144 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 09 16:46:15.357520 master-0 kubenswrapper[32968]: I0309 16:46:15.357466 32968 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:15.365991 master-0 kubenswrapper[32968]: I0309 16:46:15.365834 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 09 16:46:15.385310 master-0 kubenswrapper[32968]: I0309 16:46:15.385160 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 09 16:46:15.406498 master-0 kubenswrapper[32968]: I0309 16:46:15.406272 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 09 16:46:15.411912 master-0 kubenswrapper[32968]: I0309 16:46:15.411868 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-client\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.426993 master-0 kubenswrapper[32968]: I0309 16:46:15.426938 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 09 16:46:15.433004 master-0 kubenswrapper[32968]: I0309 16:46:15.432916 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-serving-cert\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.446256 master-0 kubenswrapper[32968]: I0309 16:46:15.446223 32968 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 09 16:46:15.450872 master-0 kubenswrapper[32968]: I0309 16:46:15.450752 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-cabundle\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:46:15.466224 master-0 kubenswrapper[32968]: I0309 16:46:15.466032 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 09 16:46:15.471677 master-0 kubenswrapper[32968]: I0309 16:46:15.471649 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-encryption-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.485292 master-0 kubenswrapper[32968]: I0309 16:46:15.485235 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 09 16:46:15.487912 master-0 kubenswrapper[32968]: I0309 16:46:15.487888 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/af4aa8d4-09e1-4589-b7bf-885617a11337-signing-key\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:46:15.506182 master-0 kubenswrapper[32968]: I0309 16:46:15.506110 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 09 16:46:15.526476 master-0 kubenswrapper[32968]: I0309 16:46:15.526408 32968 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 09 16:46:15.535212 master-0 kubenswrapper[32968]: I0309 16:46:15.535149 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-image-import-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.546145 master-0 kubenswrapper[32968]: I0309 16:46:15.546097 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 09 16:46:15.553625 master-0 kubenswrapper[32968]: I0309 16:46:15.553596 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-audit\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.565217 master-0 kubenswrapper[32968]: I0309 16:46:15.565135 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 09 16:46:15.587090 master-0 kubenswrapper[32968]: I0309 16:46:15.586317 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 09 16:46:15.593586 master-0 kubenswrapper[32968]: I0309 16:46:15.593553 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-encryption-config\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.605046 master-0 kubenswrapper[32968]: I0309 16:46:15.605015 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Mar 09 16:46:15.612588 master-0 kubenswrapper[32968]: I0309 16:46:15.612569 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-serving-cert\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.625960 master-0 kubenswrapper[32968]: I0309 16:46:15.625340 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 09 16:46:15.632908 master-0 kubenswrapper[32968]: I0309 16:46:15.632851 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-client\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.646297 master-0 kubenswrapper[32968]: I0309 16:46:15.646226 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 09 16:46:15.654577 master-0 kubenswrapper[32968]: I0309 16:46:15.654521 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-config\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.667275 master-0 kubenswrapper[32968]: I0309 16:46:15.666728 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 09 16:46:15.668141 master-0 kubenswrapper[32968]: I0309 16:46:15.668076 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-etcd-serving-ca\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.686194 master-0 kubenswrapper[32968]: I0309 16:46:15.686122 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 09 16:46:15.692782 master-0 kubenswrapper[32968]: I0309 16:46:15.692634 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-trusted-ca-bundle\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:15.706287 master-0 kubenswrapper[32968]: I0309 16:46:15.706254 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 09 16:46:15.726216 master-0 kubenswrapper[32968]: I0309 16:46:15.726164 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 09 16:46:15.747029 master-0 kubenswrapper[32968]: I0309 16:46:15.746951 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 09 16:46:15.754178 master-0 kubenswrapper[32968]: I0309 16:46:15.754111 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-etcd-serving-ca\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:15.765911 master-0 kubenswrapper[32968]: I0309 16:46:15.765848 32968 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 09 16:46:15.772682 master-0 kubenswrapper[32968]: I0309 16:46:15.772637 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-service-ca\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.785528 master-0 kubenswrapper[32968]: I0309 16:46:15.785203 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-jtzms" Mar 09 16:46:15.806804 master-0 kubenswrapper[32968]: I0309 16:46:15.806734 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 09 16:46:15.814857 master-0 kubenswrapper[32968]: I0309 16:46:15.814792 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:15.823984 master-0 kubenswrapper[32968]: I0309 16:46:15.823930 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 09 16:46:15.826590 master-0 kubenswrapper[32968]: I0309 16:46:15.826515 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 09 16:46:15.846508 master-0 kubenswrapper[32968]: I0309 16:46:15.846392 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 09 16:46:15.873079 master-0 kubenswrapper[32968]: I0309 16:46:15.872988 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 09 16:46:15.874993 master-0 kubenswrapper[32968]: I0309 16:46:15.874945 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-trusted-ca-bundle\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv"
Mar 09 16:46:15.886731 master-0 kubenswrapper[32968]: I0309 16:46:15.886666 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 09 16:46:15.895593 master-0 kubenswrapper[32968]: I0309 16:46:15.895510 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c93fb5d-373d-4473-99dd-50e4398bafbf-audit-policies\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7"
Mar 09 16:46:15.924609 master-0 kubenswrapper[32968]: I0309 16:46:15.924470 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") pod \"696fcca2-df1a-491d-956d-1cfda1ee5e70\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") "
Mar 09 16:46:15.924609 master-0 kubenswrapper[32968]: I0309 16:46:15.924579 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock" (OuterVolumeSpecName: "var-lock") pod "696fcca2-df1a-491d-956d-1cfda1ee5e70" (UID: "696fcca2-df1a-491d-956d-1cfda1ee5e70"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:15.925169 master-0 kubenswrapper[32968]: I0309 16:46:15.924691 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") pod \"696fcca2-df1a-491d-956d-1cfda1ee5e70\" (UID: \"696fcca2-df1a-491d-956d-1cfda1ee5e70\") "
Mar 09 16:46:15.925169 master-0 kubenswrapper[32968]: I0309 16:46:15.924734 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "696fcca2-df1a-491d-956d-1cfda1ee5e70" (UID: "696fcca2-df1a-491d-956d-1cfda1ee5e70"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:15.926988 master-0 kubenswrapper[32968]: I0309 16:46:15.926957 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:15.927044 master-0 kubenswrapper[32968]: E0309 16:46:15.926977 32968 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 09 16:46:15.927044 master-0 kubenswrapper[32968]: I0309 16:46:15.926988 32968 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/696fcca2-df1a-491d-956d-1cfda1ee5e70-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:15.946461 master-0 kubenswrapper[32968]: I0309 16:46:15.946343 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-ccfvc"
Mar 09 16:46:15.966527 master-0 kubenswrapper[32968]: I0309 16:46:15.966450 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 09 16:46:15.971012 master-0 kubenswrapper[32968]: I0309 16:46:15.970911 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5587e967-124e-4f2a-b7fb-42cb16bfc337-metrics-tls\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:46:15.986444 master-0 kubenswrapper[32968]: I0309 16:46:15.986360 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-fhhfg"
Mar 09 16:46:16.006093 master-0 kubenswrapper[32968]: I0309 16:46:16.006028 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 09 16:46:16.012240 master-0 kubenswrapper[32968]: I0309 16:46:16.012180 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5587e967-124e-4f2a-b7fb-42cb16bfc337-config-volume\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9"
Mar 09 16:46:16.025971 master-0 kubenswrapper[32968]: I0309 16:46:16.025921 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 09 16:46:16.047294 master-0 kubenswrapper[32968]: I0309 16:46:16.047219 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 09 16:46:16.065662 master-0 kubenswrapper[32968]: I0309 16:46:16.065588 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 09 16:46:16.066169 master-0 kubenswrapper[32968]: I0309 16:46:16.066112 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/baf704e3-daf2-4934-a04e-d31df8df0c4a-mcd-auth-proxy-config\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:46:16.069431 master-0 kubenswrapper[32968]: I0309 16:46:16.069376 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8972b380-8f87-4b73-8f95-440d34d03884-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"
Mar 09 16:46:16.073013 master-0 kubenswrapper[32968]: I0309 16:46:16.072973 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:16.084127 master-0 kubenswrapper[32968]: I0309 16:46:16.084058 32968 request.go:700] Waited for 1.005463145s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0
Mar 09 16:46:16.089636 master-0 kubenswrapper[32968]: I0309 16:46:16.087338 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 09 16:46:16.097519 master-0 kubenswrapper[32968]: I0309 16:46:16.097472 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d6b4992e-50f3-473c-aa83-ed35569ba307-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:16.107131 master-0 kubenswrapper[32968]: I0309 16:46:16.107090 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 09 16:46:16.125706 master-0 kubenswrapper[32968]: I0309 16:46:16.125618 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 09 16:46:16.146466 master-0 kubenswrapper[32968]: I0309 16:46:16.146388 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-2l9mk"
Mar 09 16:46:16.167055 master-0 kubenswrapper[32968]: I0309 16:46:16.166999 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 09 16:46:16.186014 master-0 kubenswrapper[32968]: I0309 16:46:16.185964 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-q2k6n"
Mar 09 16:46:16.192106 master-0 kubenswrapper[32968]: E0309 16:46:16.192067 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192208 master-0 kubenswrapper[32968]: E0309 16:46:16.192128 32968 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192208 master-0 kubenswrapper[32968]: E0309 16:46:16.192168 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca podName:e5c4ccb0-f795-44bd-9bb4-baf84564c239 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692146209 +0000 UTC m=+2.795468749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca") pod "prometheus-operator-5ff8674d55-hnc7v" (UID: "e5c4ccb0-f795-44bd-9bb4-baf84564c239") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192208 master-0 kubenswrapper[32968]: E0309 16:46:16.192197 32968 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192365 master-0 kubenswrapper[32968]: E0309 16:46:16.192235 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692227471 +0000 UTC m=+2.795550011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192365 master-0 kubenswrapper[32968]: E0309 16:46:16.192269 32968 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192365 master-0 kubenswrapper[32968]: E0309 16:46:16.192277 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle podName:8be2517a-6f28-4289-a108-6e3345a1e246 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692247061 +0000 UTC m=+2.795569601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle") pod "insights-operator-8f89dfddd-5fjz8" (UID: "8be2517a-6f28-4289-a108-6e3345a1e246") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192365 master-0 kubenswrapper[32968]: E0309 16:46:16.192295 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692288782 +0000 UTC m=+2.795611322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192365 master-0 kubenswrapper[32968]: E0309 16:46:16.192300 32968 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192365 master-0 kubenswrapper[32968]: E0309 16:46:16.192337 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config podName:ea34ff7e-27fa-4c26-acc0-ec551985eb76 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692330344 +0000 UTC m=+2.795652884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" (UID: "ea34ff7e-27fa-4c26-acc0-ec551985eb76") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192642 master-0 kubenswrapper[32968]: E0309 16:46:16.192601 32968 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192642 master-0 kubenswrapper[32968]: E0309 16:46:16.192643 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692632991 +0000 UTC m=+2.795955531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192758 master-0 kubenswrapper[32968]: E0309 16:46:16.192688 32968 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192758 master-0 kubenswrapper[32968]: E0309 16:46:16.192716 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692708693 +0000 UTC m=+2.796031233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.192758 master-0 kubenswrapper[32968]: E0309 16:46:16.192750 32968 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.192883 master-0 kubenswrapper[32968]: E0309 16:46:16.192776 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert podName:8796f37c-d1ec-469d-90df-e007bf620e8c nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.692769075 +0000 UTC m=+2.796091615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert") pod "packageserver-775b84c99f-6ffjr" (UID: "8796f37c-d1ec-469d-90df-e007bf620e8c") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.194313 master-0 kubenswrapper[32968]: E0309 16:46:16.194242 32968 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.194313 master-0 kubenswrapper[32968]: E0309 16:46:16.194279 32968 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.194444 master-0 kubenswrapper[32968]: E0309 16:46:16.194347 32968 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.194444 master-0 kubenswrapper[32968]: E0309 16:46:16.194356 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.694298626 +0000 UTC m=+2.797621166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.194444 master-0 kubenswrapper[32968]: E0309 16:46:16.194437 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs podName:e91a0e23-c95b-4290-9c0c-29101febfc8f nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.694400559 +0000 UTC m=+2.797723099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs") pod "multus-admission-controller-7769569c45-jcsfw" (UID: "e91a0e23-c95b-4290-9c0c-29101febfc8f") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.194574 master-0 kubenswrapper[32968]: E0309 16:46:16.194463 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.69445263 +0000 UTC m=+2.797775170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.195580 master-0 kubenswrapper[32968]: E0309 16:46:16.195547 32968 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195580 master-0 kubenswrapper[32968]: E0309 16:46:16.195577 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195663 master-0 kubenswrapper[32968]: E0309 16:46:16.195607 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls podName:e5c4ccb0-f795-44bd-9bb4-baf84564c239 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.69559427 +0000 UTC m=+2.798916820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-hnc7v" (UID: "e5c4ccb0-f795-44bd-9bb4-baf84564c239") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195663 master-0 kubenswrapper[32968]: E0309 16:46:16.195627 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.195663 master-0 kubenswrapper[32968]: E0309 16:46:16.195637 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs podName:82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695623931 +0000 UTC m=+2.798946481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs") pod "machine-config-server-7d5bx" (UID: "82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195663 32968 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-5k05m0jd20f8o: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195682 32968 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195692 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695671992 +0000 UTC m=+2.798994762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195709 32968 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195721 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695705513 +0000 UTC m=+2.799028293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195724 32968 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195741 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca podName:8677cbd3-649f-41cd-8b8a-eadca971906b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695733644 +0000 UTC m=+2.799056424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca") pod "route-controller-manager-675f85b8f7-bt9gb" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.195766 master-0 kubenswrapper[32968]: E0309 16:46:16.195766 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert podName:a320d845-3a5d-4027-a765-f0b2dc07f9de nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695753664 +0000 UTC m=+2.799076444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-6zcn7" (UID: "a320d845-3a5d-4027-a765-f0b2dc07f9de") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.196021 master-0 kubenswrapper[32968]: E0309 16:46:16.195776 32968 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.196021 master-0 kubenswrapper[32968]: E0309 16:46:16.195781 32968 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.196021 master-0 kubenswrapper[32968]: E0309 16:46:16.195785 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695777405 +0000 UTC m=+2.799099945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.196021 master-0 kubenswrapper[32968]: E0309 16:46:16.195807 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert podName:631f2bdf-2ed4-4315-98c3-c5a538d0aec3 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695799365 +0000 UTC m=+2.799122155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-8nlvp" (UID: "631f2bdf-2ed4-4315-98c3-c5a538d0aec3") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.196021 master-0 kubenswrapper[32968]: E0309 16:46:16.195826 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle podName:8be2517a-6f28-4289-a108-6e3345a1e246 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.695815996 +0000 UTC m=+2.799138556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle") pod "insights-operator-8f89dfddd-5fjz8" (UID: "8be2517a-6f28-4289-a108-6e3345a1e246") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.196957 master-0 kubenswrapper[32968]: E0309 16:46:16.196920 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.197033 master-0 kubenswrapper[32968]: E0309 16:46:16.196964 32968 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.197033 master-0 kubenswrapper[32968]: E0309 16:46:16.196981 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token podName:82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.696966687 +0000 UTC m=+2.800289227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token") pod "machine-config-server-7d5bx" (UID: "82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.197033 master-0 kubenswrapper[32968]: E0309 16:46:16.197025 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197028 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls podName:92bd7735-8e3c-43bb-b543-03e6e6c5142a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.697013518 +0000 UTC m=+2.800336058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-j9x6n" (UID: "92bd7735-8e3c-43bb-b543-03e6e6c5142a") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197027 32968 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197054 32968 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197069 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.69705848 +0000 UTC m=+2.800381030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197090 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images podName:a6cd9347-eec9-4549-9de4-6033112634ce nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.69708018 +0000 UTC m=+2.800402720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images") pod "machine-api-operator-84bf6db4f9-4qg6v" (UID: "a6cd9347-eec9-4549-9de4-6033112634ce") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197112 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config podName:3ec3050d-8e6f-466a-995a-f78270408a85 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.697099621 +0000 UTC m=+2.800422171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config") pod "machine-approver-754bdc9f9d-pfbvg" (UID: "3ec3050d-8e6f-466a-995a-f78270408a85") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197127 master-0 kubenswrapper[32968]: E0309 16:46:16.197114 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197337 master-0 kubenswrapper[32968]: E0309 16:46:16.197138 32968 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.197337 master-0 kubenswrapper[32968]: E0309 16:46:16.197146 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197337 master-0 kubenswrapper[32968]: E0309 16:46:16.197160 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.697149862 +0000 UTC m=+2.800472402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.197337 master-0 kubenswrapper[32968]: E0309 16:46:16.197176 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls podName:3ec3050d-8e6f-466a-995a-f78270408a85 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.697167642 +0000 UTC m=+2.800490192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls") pod "machine-approver-754bdc9f9d-pfbvg" (UID: "3ec3050d-8e6f-466a-995a-f78270408a85") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.197337 master-0 kubenswrapper[32968]: E0309 16:46:16.197201 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.697190963 +0000 UTC m=+2.800513513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.198333 master-0 kubenswrapper[32968]: E0309 16:46:16.198301 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.198383 master-0 kubenswrapper[32968]: E0309 16:46:16.198353 32968 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.198383 master-0 kubenswrapper[32968]: E0309 16:46:16.198368 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.698357254 +0000 UTC m=+2.801679794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198394 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198408 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.698394835 +0000 UTC m=+2.801717395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198452 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.698441506 +0000 UTC m=+2.801764056 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198452 32968 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198482 32968 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198511 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert podName:18f0164f-0875-4668-b155-df69e05e8ae0 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.698502517 +0000 UTC m=+2.801825067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert") pod "ingress-canary-nxtms" (UID: "18f0164f-0875-4668-b155-df69e05e8ae0") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.198534 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle podName:73f1f0ba-f90e-45aa-b1ba-df011a5b9d56 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.698523868 +0000 UTC m=+2.801846418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle") pod "router-default-79f8cd6fdd-rvnwf" (UID: "73f1f0ba-f90e-45aa-b1ba-df011a5b9d56") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199609 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199652 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199672 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls podName:baf704e3-daf2-4934-a04e-d31df8df0c4a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699658909 +0000 UTC m=+2.802981459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls") pod "machine-config-daemon-94s4v" (UID: "baf704e3-daf2-4934-a04e-d31df8df0c4a") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199687 32968 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199701 32968 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199729 32968 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199733 32968 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199752 32968 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199765 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199771 32968 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 
master-0 kubenswrapper[32968]: E0309 16:46:16.199784 32968 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199795 32968 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199811 32968 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199694 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699682949 +0000 UTC m=+2.803005499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199839 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates podName:e346cb5b-411d-4014-a8d0-590d8deee8ac nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699830383 +0000 UTC m=+2.803152933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates") pod "prometheus-operator-admission-webhook-8464df8497-kdqvv" (UID: "e346cb5b-411d-4014-a8d0-590d8deee8ac") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199863 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls podName:a6cd9347-eec9-4549-9de4-6033112634ce nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699854674 +0000 UTC m=+2.803177224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-4qg6v" (UID: "a6cd9347-eec9-4549-9de4-6033112634ce") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.199864 master-0 kubenswrapper[32968]: E0309 16:46:16.199881 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config podName:8d1829b3-643f-4f79-b6de-ae6ca5e78d4a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699873874 +0000 UTC m=+2.803196424 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-jzjhh" (UID: "8d1829b3-643f-4f79-b6de-ae6ca5e78d4a") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.199902 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls podName:34c0b60e-da69-452d-858d-0af77f18946d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699892185 +0000 UTC m=+2.803214735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-wd5cw" (UID: "34c0b60e-da69-452d-858d-0af77f18946d") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.199922 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls podName:ea34ff7e-27fa-4c26-acc0-ec551985eb76 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699912135 +0000 UTC m=+2.803234685 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" (UID: "ea34ff7e-27fa-4c26-acc0-ec551985eb76") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.199940 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls podName:8972b380-8f87-4b73-8f95-440d34d03884 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699932436 +0000 UTC m=+2.803254996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls") pod "machine-config-controller-ff46b7bdf-xqpdd" (UID: "8972b380-8f87-4b73-8f95-440d34d03884") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.199970 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images podName:d6b4992e-50f3-473c-aa83-ed35569ba307 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699959947 +0000 UTC m=+2.803282497 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images") pod "machine-config-operator-fdb5c78b5-db9vp" (UID: "d6b4992e-50f3-473c-aa83-ed35569ba307") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.199994 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.699985267 +0000 UTC m=+2.803307817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.200003 32968 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.200013 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert podName:8796f37c-d1ec-469d-90df-e007bf620e8c nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.700004458 +0000 UTC m=+2.803327008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert") pod "packageserver-775b84c99f-6ffjr" (UID: "8796f37c-d1ec-469d-90df-e007bf620e8c") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.200038 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth podName:73f1f0ba-f90e-45aa-b1ba-df011a5b9d56 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.700029758 +0000 UTC m=+2.803352308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth") pod "router-default-79f8cd6fdd-rvnwf" (UID: "73f1f0ba-f90e-45aa-b1ba-df011a5b9d56") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.200058 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images podName:ea34ff7e-27fa-4c26-acc0-ec551985eb76 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.700049669 +0000 UTC m=+2.803372219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" (UID: "ea34ff7e-27fa-4c26-acc0-ec551985eb76") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.200166 32968 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.200645 master-0 kubenswrapper[32968]: E0309 16:46:16.200222 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config podName:e5c4ccb0-f795-44bd-9bb4-baf84564c239 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.700210483 +0000 UTC m=+2.803533033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-hnc7v" (UID: "e5c4ccb0-f795-44bd-9bb4-baf84564c239") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201228 master-0 kubenswrapper[32968]: E0309 16:46:16.201148 32968 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201228 master-0 kubenswrapper[32968]: E0309 16:46:16.201183 32968 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201228 master-0 kubenswrapper[32968]: E0309 16:46:16.201206 32968 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config podName:a6cd9347-eec9-4549-9de4-6033112634ce nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701191569 +0000 UTC m=+2.804514269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config") pod "machine-api-operator-84bf6db4f9-4qg6v" (UID: "a6cd9347-eec9-4549-9de4-6033112634ce") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201218 32968 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201244 32968 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201245 32968 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201254 32968 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201229 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.70121769 +0000 UTC m=+2.804540230 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201288 32968 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201292 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert podName:8d1829b3-643f-4f79-b6de-ae6ca5e78d4a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701282341 +0000 UTC m=+2.804604901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert") pod "cluster-autoscaler-operator-69576476f7-jzjhh" (UID: "8d1829b3-643f-4f79-b6de-ae6ca5e78d4a") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201316 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca podName:a320d845-3a5d-4027-a765-f0b2dc07f9de nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701305402 +0000 UTC m=+2.804627952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-6zcn7" (UID: "a320d845-3a5d-4027-a765-f0b2dc07f9de") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201327 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201330 32968 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201339 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config podName:92bd7735-8e3c-43bb-b543-03e6e6c5142a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701329453 +0000 UTC m=+2.804652003 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-j9x6n" (UID: "92bd7735-8e3c-43bb-b543-03e6e6c5142a") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201339 master-0 kubenswrapper[32968]: E0309 16:46:16.201344 32968 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201328 32968 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201359 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config podName:3ec3050d-8e6f-466a-995a-f78270408a85 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701351603 +0000 UTC m=+2.804674153 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config") pod "machine-approver-754bdc9f9d-pfbvg" (UID: "3ec3050d-8e6f-466a-995a-f78270408a85") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201441 32968 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201459 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs podName:73f1f0ba-f90e-45aa-b1ba-df011a5b9d56 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701447286 +0000 UTC m=+2.804770006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs") pod "router-default-79f8cd6fdd-rvnwf" (UID: "73f1f0ba-f90e-45aa-b1ba-df011a5b9d56") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201483 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701471226 +0000 UTC m=+2.804793976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201505 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701492897 +0000 UTC m=+2.804815667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201523 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert podName:8677cbd3-649f-41cd-8b8a-eadca971906b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701515597 +0000 UTC m=+2.804838147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert") pod "route-controller-manager-675f85b8f7-bt9gb" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201545 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701533438 +0000 UTC m=+2.804856208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:16.201795 master-0 kubenswrapper[32968]: E0309 16:46:16.201576 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.701567859 +0000 UTC m=+2.804890639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.202647 master-0 kubenswrapper[32968]: E0309 16:46:16.202593 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.202647 master-0 kubenswrapper[32968]: E0309 16:46:16.202628 32968 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.202780 master-0 kubenswrapper[32968]: E0309 16:46:16.202670 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca podName:92bd7735-8e3c-43bb-b543-03e6e6c5142a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.702655968 +0000 UTC m=+2.805978708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-j9x6n" (UID: "92bd7735-8e3c-43bb-b543-03e6e6c5142a") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.202780 master-0 kubenswrapper[32968]: E0309 16:46:16.202677 32968 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202780 master-0 kubenswrapper[32968]: E0309 16:46:16.202702 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config podName:8677cbd3-649f-41cd-8b8a-eadca971906b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.702688189 +0000 UTC m=+2.806010969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config") pod "route-controller-manager-675f85b8f7-bt9gb" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:16.202780 master-0 kubenswrapper[32968]: E0309 16:46:16.202723 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls podName:357570a4-f69b-4970-9b6f-fe06fc4c2f90 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.70271195 +0000 UTC m=+2.806034490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-cvdzq" (UID: "357570a4-f69b-4970-9b6f-fe06fc4c2f90") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202780 master-0 kubenswrapper[32968]: E0309 16:46:16.202753 32968 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202962 master-0 kubenswrapper[32968]: E0309 16:46:16.202793 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate podName:73f1f0ba-f90e-45aa-b1ba-df011a5b9d56 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.702786122 +0000 UTC m=+2.806108662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate") pod "router-default-79f8cd6fdd-rvnwf" (UID: "73f1f0ba-f90e-45aa-b1ba-df011a5b9d56") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202962 master-0 kubenswrapper[32968]: E0309 16:46:16.202793 32968 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202962 master-0 kubenswrapper[32968]: E0309 16:46:16.202811 32968 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202962 master-0 kubenswrapper[32968]: E0309 16:46:16.202842 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert podName:8be2517a-6f28-4289-a108-6e3345a1e246 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.702834983 +0000 UTC m=+2.806157523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert") pod "insights-operator-8f89dfddd-5fjz8" (UID: "8be2517a-6f28-4289-a108-6e3345a1e246") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.202962 master-0 kubenswrapper[32968]: E0309 16:46:16.202862 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:16.702853043 +0000 UTC m=+2.806175583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:16.206110 master-0 kubenswrapper[32968]: I0309 16:46:16.206078 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 09 16:46:16.224831 master-0 kubenswrapper[32968]: I0309 16:46:16.224768 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 09 16:46:16.262403 master-0 kubenswrapper[32968]: I0309 16:46:16.262310 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 09 16:46:16.265156 master-0 kubenswrapper[32968]: I0309 16:46:16.265099 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 09 16:46:16.285890 master-0 kubenswrapper[32968]: I0309 16:46:16.285837 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 09 16:46:16.306326 master-0 kubenswrapper[32968]: I0309 16:46:16.306269 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-d68b9"
Mar 09 16:46:16.326612 master-0 kubenswrapper[32968]: I0309 16:46:16.326532 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-n686v"
Mar 09 16:46:16.327519 master-0 kubenswrapper[32968]: I0309 16:46:16.327467 32968 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 09 16:46:16.330790 master-0 kubenswrapper[32968]: I0309 16:46:16.330754 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 09 16:46:16.330901 master-0 kubenswrapper[32968]: I0309 16:46:16.330802 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 09 16:46:16.330901 master-0 kubenswrapper[32968]: I0309 16:46:16.330812 32968 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 09 16:46:16.331127 master-0 kubenswrapper[32968]: I0309 16:46:16.331099 32968 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 09 16:46:16.345716 master-0 kubenswrapper[32968]: I0309 16:46:16.345650 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 09 16:46:16.366881 master-0 kubenswrapper[32968]: I0309 16:46:16.366775 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 09 16:46:16.381891 master-0 kubenswrapper[32968]: I0309 16:46:16.381771 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:46:16.386259 master-0 kubenswrapper[32968]: I0309 16:46:16.386131 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 09 16:46:16.419396 master-0 kubenswrapper[32968]: I0309 16:46:16.419315 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrt7m\" (UniqueName: \"kubernetes.io/projected/5565c060-5952-4e85-8873-18bb80663924-kube-api-access-rrt7m\") pod \"network-operator-7c649bf6d4-r82z7\" (UID: \"5565c060-5952-4e85-8873-18bb80663924\") " pod="openshift-network-operator/network-operator-7c649bf6d4-r82z7"
Mar 09 16:46:16.426349 master-0 kubenswrapper[32968]: I0309 16:46:16.426289 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 09 16:46:16.457933 master-0 kubenswrapper[32968]: I0309 16:46:16.457768 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a4491c-12cc-4531-ad3e-246e93ed7842-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-rcf8n\" (UID: \"34a4491c-12cc-4531-ad3e-246e93ed7842\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-rcf8n"
Mar 09 16:46:16.465704 master-0 kubenswrapper[32968]: I0309 16:46:16.465643 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 09 16:46:16.492031 master-0 kubenswrapper[32968]: I0309 16:46:16.491973 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 09 16:46:16.505854 master-0 kubenswrapper[32968]: I0309 16:46:16.505594 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqzqk"
Mar 09 16:46:16.511973 master-0 kubenswrapper[32968]: I0309 16:46:16.511895 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"696fcca2-df1a-491d-956d-1cfda1ee5e70","Type":"ContainerDied","Data":"48ea3b1c1a43df7f7909a26935d767da157bf2e1b5a1c65e482d9227e70712b8"}
Mar 09 16:46:16.511973 master-0 kubenswrapper[32968]: I0309 16:46:16.511971 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ea3b1c1a43df7f7909a26935d767da157bf2e1b5a1c65e482d9227e70712b8"
Mar 09 16:46:16.511973 master-0 kubenswrapper[32968]: I0309 16:46:16.511927 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 09 16:46:16.537519 master-0 kubenswrapper[32968]: I0309 16:46:16.537412 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cf9eae5-38bc-48fa-8339-d0751bb18e8c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-56m4c\" (UID: \"6cf9eae5-38bc-48fa-8339-d0751bb18e8c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-56m4c"
Mar 09 16:46:16.560709 master-0 kubenswrapper[32968]: I0309 16:46:16.560653 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsqs\" (UniqueName: \"kubernetes.io/projected/e2e38be5-1d33-4171-b27f-78a335f1590b-kube-api-access-ctsqs\") pod \"authentication-operator-7c6989d6c4-6wlgj\" (UID: \"e2e38be5-1d33-4171-b27f-78a335f1590b\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-6wlgj"
Mar 09 16:46:16.579321 master-0 kubenswrapper[32968]: I0309 16:46:16.579236 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgk6\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-kube-api-access-psgk6\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:46:16.604895 master-0 kubenswrapper[32968]: I0309 16:46:16.604834 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-bound-sa-token\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:46:16.607754 master-0 kubenswrapper[32968]: I0309 16:46:16.607706 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 09 16:46:16.626533 master-0 kubenswrapper[32968]: I0309 16:46:16.626472 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-gpmvf"
Mar 09 16:46:16.647412 master-0 kubenswrapper[32968]: I0309 16:46:16.647341 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 09 16:46:16.666463 master-0 kubenswrapper[32968]: I0309 16:46:16.666346 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 09 16:46:16.686060 master-0 kubenswrapper[32968]: I0309 16:46:16.685989 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 09 16:46:16.719580 master-0 kubenswrapper[32968]: I0309 16:46:16.719376 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrlr\" (UniqueName: \"kubernetes.io/projected/004d1e93-2345-4e62-902c-33f9dbb0f397-kube-api-access-hkrlr\") pod \"cluster-monitoring-operator-674cbfbd9d-8lvt9\" (UID: \"004d1e93-2345-4e62-902c-33f9dbb0f397\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-8lvt9"
Mar 09 16:46:16.741685 master-0 kubenswrapper[32968]: I0309 16:46:16.741583 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5trxh\" (UniqueName: \"kubernetes.io/projected/f606b775-bf22-4d64-abb4-8e0e24ddb5cd-kube-api-access-5trxh\") pod \"ingress-operator-677db989d6-xtmhw\" (UID: \"f606b775-bf22-4d64-abb4-8e0e24ddb5cd\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw"
Mar 09 16:46:16.741685 master-0 kubenswrapper[32968]: I0309 16:46:16.741631 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:46:16.747289 master-0 kubenswrapper[32968]: I0309 16:46:16.747241 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:46:16.756637 master-0 kubenswrapper[32968]: I0309 16:46:16.756516 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e765395-7c6b-4cba-9a5a-37ba888722bb-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-dd2j5\" (UID: \"2e765395-7c6b-4cba-9a5a-37ba888722bb\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-dd2j5"
Mar 09 16:46:16.779363 master-0 kubenswrapper[32968]: I0309 16:46:16.779256 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx"
Mar 09 16:46:16.779363 master-0 kubenswrapper[32968]: I0309 16:46:16.779370 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779417 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779483 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779525 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779560 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779595 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779641 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779675 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779714 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779753 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779792 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg"
Mar 09 16:46:16.779853 master-0 kubenswrapper[32968]: I0309 16:46:16.779822 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.779888 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.779944 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.779989 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780045 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780079 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780110 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780133 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780255 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780327 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:46:16.780384 master-0 kubenswrapper[32968]: I0309 16:46:16.780392 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:16.780835 master-0 kubenswrapper[32968]: I0309 16:46:16.780779 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:16.780835 master-0 kubenswrapper[32968]: I0309 16:46:16.780824 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:46:16.780924 master-0 kubenswrapper[32968]: I0309 16:46:16.780879 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:46:16.780972 master-0 kubenswrapper[32968]: I0309 16:46:16.780943 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"
Mar 09 16:46:16.781033 master-0 kubenswrapper[32968]: I0309 16:46:16.781004 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:46:16.781092 master-0 kubenswrapper[32968]: I0309 16:46:16.781045 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv"
Mar 09 16:46:16.781155 master-0 kubenswrapper[32968]: I0309 16:46:16.781131 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k"
Mar 09 16:46:16.781208 master-0 kubenswrapper[32968]: I0309 16:46:16.781168 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd"
Mar 09 16:46:16.781246 master-0 kubenswrapper[32968]: I0309 16:46:16.781212 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:46:16.781292 master-0 kubenswrapper[32968]: I0309 16:46:16.781247 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kdqvv\" (UID: \"e346cb5b-411d-4014-a8d0-590d8deee8ac\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv"
Mar 09 16:46:16.781462 master-0 kubenswrapper[32968]: I0309 16:46:16.781317 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.782503 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-stats-auth\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.782611 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d6b4992e-50f3-473c-aa83-ed35569ba307-images\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.782993 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-service-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.783341 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-service-ca-bundle\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784103 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34c0b60e-da69-452d-858d-0af77f18946d-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784213 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784254 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784499 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784585 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784684 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784718 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784755 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.784972 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785095 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785180 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785251 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785308 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785364 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785403 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785487 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785522 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-metrics-certs\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785529 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785670 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.785712 32968
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786006 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/357570a4-f69b-4970-9b6f-fe06fc4c2f90-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786027 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786165 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786301 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8be2517a-6f28-4289-a108-6e3345a1e246-serving-cert\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: 
\"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786315 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786415 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786476 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786531 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:16.786865 master-0 
kubenswrapper[32968]: I0309 16:46:16.786576 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786619 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786644 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-default-certificate\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786672 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786768 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:16.786865 master-0 kubenswrapper[32968]: I0309 16:46:16.786917 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:16.788784 master-0 kubenswrapper[32968]: I0309 16:46:16.786980 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:16.788784 master-0 kubenswrapper[32968]: I0309 16:46:16.787013 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8be2517a-6f28-4289-a108-6e3345a1e246-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:16.788784 master-0 kubenswrapper[32968]: I0309 16:46:16.787035 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:46:16.790131 master-0 
kubenswrapper[32968]: I0309 16:46:16.789989 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfj7p\" (UniqueName: \"kubernetes.io/projected/df2ec8b2-02d7-40c4-ac20-32615d689697-kube-api-access-rfj7p\") pod \"multus-gfqq8\" (UID: \"df2ec8b2-02d7-40c4-ac20-32615d689697\") " pod="openshift-multus/multus-gfqq8" Mar 09 16:46:16.798919 master-0 kubenswrapper[32968]: I0309 16:46:16.798838 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcqb\" (UniqueName: \"kubernetes.io/projected/d15da434-241d-4a93-9ce3-f943d43bf2ce-kube-api-access-vqcqb\") pod \"catalog-operator-7d9c49f57b-hv8xl\" (UID: \"d15da434-241d-4a93-9ce3-f943d43bf2ce\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:46:16.822923 master-0 kubenswrapper[32968]: I0309 16:46:16.822765 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv95c\" (UniqueName: \"kubernetes.io/projected/a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a-kube-api-access-fv95c\") pod \"service-ca-operator-69b6fc6b88-j99pw\" (UID: \"a1f8ec87-ff04-4c3e-afe8-1b7898b22a0a\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-j99pw" Mar 09 16:46:16.844766 master-0 kubenswrapper[32968]: I0309 16:46:16.844678 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dfn\" (UniqueName: \"kubernetes.io/projected/ef122f26-bfae-44d2-a70a-8507b3b47332-kube-api-access-p9dfn\") pod \"network-metrics-daemon-n7slb\" (UID: \"ef122f26-bfae-44d2-a70a-8507b3b47332\") " pod="openshift-multus/network-metrics-daemon-n7slb" Mar 09 16:46:16.844987 master-0 kubenswrapper[32968]: I0309 16:46:16.844872 32968 scope.go:117] "RemoveContainer" containerID="3d7055bdebb8473ed8f1d9e2d8ef3e1bf9615178ce3487bd7136c778ee63a023" Mar 09 16:46:16.859212 master-0 kubenswrapper[32968]: I0309 16:46:16.859150 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-whqvw\" (UniqueName: \"kubernetes.io/projected/e4895f22-8fcd-4ace-96d8-bc2e18a67891-kube-api-access-whqvw\") pod \"ovnkube-control-plane-66b55d57d-5b62x\" (UID: \"e4895f22-8fcd-4ace-96d8-bc2e18a67891\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-5b62x" Mar 09 16:46:16.877472 master-0 kubenswrapper[32968]: I0309 16:46:16.877388 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sst4g\" (UniqueName: \"kubernetes.io/projected/dc732d23-37bc-41c2-9f9b-333ba517c1f8-kube-api-access-sst4g\") pod \"cluster-node-tuning-operator-66c7586884-gglsc\" (UID: \"dc732d23-37bc-41c2-9f9b-333ba517c1f8\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-gglsc" Mar 09 16:46:16.895837 master-0 kubenswrapper[32968]: I0309 16:46:16.895776 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zwh\" (UniqueName: \"kubernetes.io/projected/fa7f88a3-9845-49a3-a108-d524df592961-kube-api-access-55zwh\") pod \"cluster-baremetal-operator-5cdb4c5598-p27tf\" (UID: \"fa7f88a3-9845-49a3-a108-d524df592961\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-p27tf" Mar 09 16:46:16.917812 master-0 kubenswrapper[32968]: I0309 16:46:16.917739 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkjv9\" (UniqueName: \"kubernetes.io/projected/166fdeb9-c79f-4d99-8a6b-3f5c43398e9d-kube-api-access-xkjv9\") pod \"openshift-apiserver-operator-799b6db4d7-w4z75\" (UID: \"166fdeb9-c79f-4d99-8a6b-3f5c43398e9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-w4z75" Mar 09 16:46:16.938406 master-0 kubenswrapper[32968]: I0309 16:46:16.938352 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7dv\" (UniqueName: \"kubernetes.io/projected/d2d3c20a-f92e-433b-9fbc-b667b7bcf175-kube-api-access-nl7dv\") pod 
\"openshift-controller-manager-operator-8565d84698-nmvdk\" (UID: \"d2d3c20a-f92e-433b-9fbc-b667b7bcf175\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-nmvdk" Mar 09 16:46:16.962808 master-0 kubenswrapper[32968]: I0309 16:46:16.962742 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnw68\" (UniqueName: \"kubernetes.io/projected/1ba020e0-1728-4e56-9618-d0ec3d9126eb-kube-api-access-tnw68\") pod \"multus-additional-cni-plugins-jkhls\" (UID: \"1ba020e0-1728-4e56-9618-d0ec3d9126eb\") " pod="openshift-multus/multus-additional-cni-plugins-jkhls" Mar 09 16:46:16.964832 master-0 kubenswrapper[32968]: I0309 16:46:16.964779 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 09 16:46:16.970679 master-0 kubenswrapper[32968]: I0309 16:46:16.970530 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-apiservice-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:16.972755 master-0 kubenswrapper[32968]: I0309 16:46:16.972632 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8796f37c-d1ec-469d-90df-e007bf620e8c-webhook-cert\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:16.988129 master-0 kubenswrapper[32968]: I0309 16:46:16.987858 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-wsmcd" Mar 09 16:46:17.004314 master-0 kubenswrapper[32968]: I0309 16:46:17.004253 
32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-jfns5" Mar 09 16:46:17.040013 master-0 kubenswrapper[32968]: I0309 16:46:17.039946 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdmsj\" (UniqueName: \"kubernetes.io/projected/6c4dfdcc-e182-4831-98e4-1eedb069bcf6-kube-api-access-bdmsj\") pod \"etcd-operator-5884b9cd56-k7rrt\" (UID: \"6c4dfdcc-e182-4831-98e4-1eedb069bcf6\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-k7rrt" Mar 09 16:46:17.059282 master-0 kubenswrapper[32968]: I0309 16:46:17.059215 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98j7c\" (UniqueName: \"kubernetes.io/projected/f965b971-7e9a-4513-8450-b2b527609bd6-kube-api-access-98j7c\") pod \"package-server-manager-854648ff6d-fqwtv\" (UID: \"f965b971-7e9a-4513-8450-b2b527609bd6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:46:17.080270 master-0 kubenswrapper[32968]: I0309 16:46:17.080203 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr46z\" (UniqueName: \"kubernetes.io/projected/be86c85d-59b1-4279-8253-a998ca16cd4d-kube-api-access-pr46z\") pod \"olm-operator-d64cfc9db-qtmrd\" (UID: \"be86c85d-59b1-4279-8253-a998ca16cd4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:46:17.098503 master-0 kubenswrapper[32968]: I0309 16:46:17.098458 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p2nd\" (UniqueName: \"kubernetes.io/projected/72739f4d-da25-493b-91ef-d2b64e9297dd-kube-api-access-4p2nd\") pod \"dns-operator-589895fbb7-6sknh\" (UID: \"72739f4d-da25-493b-91ef-d2b64e9297dd\") " pod="openshift-dns-operator/dns-operator-589895fbb7-6sknh" Mar 09 16:46:17.102119 master-0 kubenswrapper[32968]: I0309 16:46:17.102062 32968 request.go:700] Waited 
for 2.014675341s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0 Mar 09 16:46:17.103880 master-0 kubenswrapper[32968]: I0309 16:46:17.103831 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 09 16:46:17.113569 master-0 kubenswrapper[32968]: I0309 16:46:17.113494 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a320d845-3a5d-4027-a765-f0b2dc07f9de-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:46:17.128877 master-0 kubenswrapper[32968]: I0309 16:46:17.128826 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 09 16:46:17.135663 master-0 kubenswrapper[32968]: I0309 16:46:17.135624 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a320d845-3a5d-4027-a765-f0b2dc07f9de-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:46:17.144438 master-0 kubenswrapper[32968]: I0309 16:46:17.144392 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 09 16:46:17.162518 master-0 kubenswrapper[32968]: I0309 16:46:17.162462 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:17.164452 master-0 kubenswrapper[32968]: I0309 16:46:17.164411 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 09 16:46:17.165583 master-0 kubenswrapper[32968]: I0309 16:46:17.165552 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-cert\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:46:17.166267 master-0 kubenswrapper[32968]: I0309 16:46:17.166200 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:17.184446 master-0 kubenswrapper[32968]: I0309 16:46:17.184341 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-hgcd7" Mar 09 16:46:17.204542 master-0 kubenswrapper[32968]: I0309 16:46:17.204497 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 09 16:46:17.205049 master-0 kubenswrapper[32968]: I0309 16:46:17.205002 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:46:17.248444 master-0 kubenswrapper[32968]: I0309 16:46:17.248268 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 09 
16:46:17.264711 master-0 kubenswrapper[32968]: I0309 16:46:17.264648 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 09 16:46:17.268006 master-0 kubenswrapper[32968]: I0309 16:46:17.267955 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z242f\" (UniqueName: \"kubernetes.io/projected/a62ba179-443d-424f-8cff-c75677e8cd5c-kube-api-access-z242f\") pod \"csi-snapshot-controller-operator-5685fbc7d-t42zc\" (UID: \"a62ba179-443d-424f-8cff-c75677e8cd5c\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-t42zc" Mar 09 16:46:17.268143 master-0 kubenswrapper[32968]: I0309 16:46:17.268002 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-config\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:46:17.286254 master-0 kubenswrapper[32968]: I0309 16:46:17.286187 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 09 16:46:17.294255 master-0 kubenswrapper[32968]: I0309 16:46:17.294178 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a6cd9347-eec9-4549-9de4-6033112634ce-images\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:46:17.306844 master-0 kubenswrapper[32968]: I0309 16:46:17.306758 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 09 16:46:17.314054 master-0 kubenswrapper[32968]: I0309 16:46:17.314001 32968 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3ec3050d-8e6f-466a-995a-f78270408a85-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:46:17.338680 master-0 kubenswrapper[32968]: I0309 16:46:17.338621 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-497s5\" (UniqueName: \"kubernetes.io/projected/457f42a7-f14c-4d61-a87a-bc1ed422feed-kube-api-access-497s5\") pod \"openshift-config-operator-64488f9d78-xzwh9\" (UID: \"457f42a7-f14c-4d61-a87a-bc1ed422feed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:46:17.344734 master-0 kubenswrapper[32968]: I0309 16:46:17.344669 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 09 16:46:17.354962 master-0 kubenswrapper[32968]: I0309 16:46:17.354904 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6cd9347-eec9-4549-9de4-6033112634ce-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:46:17.379120 master-0 kubenswrapper[32968]: I0309 16:46:17.379048 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782hr\" (UniqueName: \"kubernetes.io/projected/5b9030c9-7f5f-4e54-ae93-140469e3558b-kube-api-access-782hr\") pod \"marketplace-operator-64bf9778cb-vh6m4\" (UID: \"5b9030c9-7f5f-4e54-ae93-140469e3558b\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:46:17.384661 master-0 kubenswrapper[32968]: I0309 16:46:17.384596 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-chm9n" Mar 09 16:46:17.417710 master-0 kubenswrapper[32968]: I0309 16:46:17.417646 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6912539-9b06-4e2c-b6a8-155df31147f2-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-2hb9x\" (UID: \"d6912539-9b06-4e2c-b6a8-155df31147f2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-2hb9x" Mar 09 16:46:17.425195 master-0 kubenswrapper[32968]: I0309 16:46:17.425132 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kz284" Mar 09 16:46:17.444057 master-0 kubenswrapper[32968]: I0309 16:46:17.443989 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-8bw78" Mar 09 16:46:17.444382 master-0 kubenswrapper[32968]: I0309 16:46:17.444340 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:46:17.452337 master-0 kubenswrapper[32968]: I0309 16:46:17.452285 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-xzwh9" Mar 09 16:46:17.463744 master-0 kubenswrapper[32968]: I0309 16:46:17.463690 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 09 16:46:17.466142 master-0 kubenswrapper[32968]: I0309 16:46:17.466085 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 
16:46:17.488605 master-0 kubenswrapper[32968]: I0309 16:46:17.488504 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 09 16:46:17.492647 master-0 kubenswrapper[32968]: I0309 16:46:17.492529 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3ec3050d-8e6f-466a-995a-f78270408a85-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:46:17.504721 master-0 kubenswrapper[32968]: I0309 16:46:17.503888 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 09 16:46:17.521242 master-0 kubenswrapper[32968]: I0309 16:46:17.521174 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-xtmhw_f606b775-bf22-4d64-abb4-8e0e24ddb5cd/ingress-operator/5.log" Mar 09 16:46:17.521882 master-0 kubenswrapper[32968]: I0309 16:46:17.521796 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-xtmhw" event={"ID":"f606b775-bf22-4d64-abb4-8e0e24ddb5cd","Type":"ContainerStarted","Data":"a3fb87f301845ddf20f15ddb6dcbd56264f7a409fb78385a71997c0fb72093e1"} Mar 09 16:46:17.524320 master-0 kubenswrapper[32968]: I0309 16:46:17.524291 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 09 16:46:17.549694 master-0 kubenswrapper[32968]: I0309 16:46:17.549584 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-db2vj" Mar 09 16:46:17.581893 master-0 kubenswrapper[32968]: I0309 16:46:17.581806 32968 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j244n\" (UniqueName: \"kubernetes.io/projected/3a612208-f777-486f-9dde-048b2d898c7f-kube-api-access-j244n\") pod \"kube-storage-version-migrator-operator-7f65c457f5-hz8tp\" (UID: \"3a612208-f777-486f-9dde-048b2d898c7f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-hz8tp" Mar 09 16:46:17.598056 master-0 kubenswrapper[32968]: I0309 16:46:17.597980 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxck\" (UniqueName: \"kubernetes.io/projected/1e97466a-7c33-4efb-a961-14024d913a21-kube-api-access-4zxck\") pod \"cluster-olm-operator-77899cf6d-4mr78\" (UID: \"1e97466a-7c33-4efb-a961-14024d913a21\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-4mr78" Mar 09 16:46:17.604365 master-0 kubenswrapper[32968]: I0309 16:46:17.604303 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-xhwgr" Mar 09 16:46:17.624667 master-0 kubenswrapper[32968]: I0309 16:46:17.624595 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 09 16:46:17.632876 master-0 kubenswrapper[32968]: I0309 16:46:17.632810 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/e346cb5b-411d-4014-a8d0-590d8deee8ac-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-kdqvv\" (UID: \"e346cb5b-411d-4014-a8d0-590d8deee8ac\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" Mar 09 16:46:17.644493 master-0 kubenswrapper[32968]: I0309 16:46:17.644403 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 09 16:46:17.652942 master-0 kubenswrapper[32968]: I0309 16:46:17.652838 32968 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/baf704e3-daf2-4934-a04e-d31df8df0c4a-proxy-tls\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v"
Mar 09 16:46:17.664099 master-0 kubenswrapper[32968]: I0309 16:46:17.664035 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-5rw6v"
Mar 09 16:46:17.684877 master-0 kubenswrapper[32968]: I0309 16:46:17.684807 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x9sm5"
Mar 09 16:46:17.704532 master-0 kubenswrapper[32968]: I0309 16:46:17.704469 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-g4frj"
Mar 09 16:46:17.726092 master-0 kubenswrapper[32968]: I0309 16:46:17.726031 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 09 16:46:17.734413 master-0 kubenswrapper[32968]: I0309 16:46:17.734353 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp"
Mar 09 16:46:17.744617 master-0 kubenswrapper[32968]: I0309 16:46:17.744565 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 09 16:46:17.763989 master-0 kubenswrapper[32968]: I0309 16:46:17.763867 32968
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-mwccd"
Mar 09 16:46:17.782732 master-0 kubenswrapper[32968]: E0309 16:46:17.782640 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.782732 master-0 kubenswrapper[32968]: E0309 16:46:17.782681 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782733 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782760 32968 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782765 32968 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782794 32968 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782745 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782853 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the
condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782806 32968 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782869 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782783 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls podName:8972b380-8f87-4b73-8f95-440d34d03884 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.782758444 +0000 UTC m=+4.886080984 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls") pod "machine-config-controller-ff46b7bdf-xqpdd" (UID: "8972b380-8f87-4b73-8f95-440d34d03884") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.782973 master-0 kubenswrapper[32968]: E0309 16:46:17.782926 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.782999 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.782932269 +0000 UTC m=+4.886254809 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783030 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783017891 +0000 UTC m=+4.886340631 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783061 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783044992 +0000 UTC m=+4.886367532 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783095 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token podName:82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783084913 +0000 UTC m=+4.886407453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token") pod "machine-config-server-7d5bx" (UID: "82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783119 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls podName:e5c4ccb0-f795-44bd-9bb4-baf84564c239 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783111173 +0000 UTC m=+4.886433713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-hnc7v" (UID: "e5c4ccb0-f795-44bd-9bb4-baf84564c239") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783147 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed.
No retries permitted until 2026-03-09 16:46:18.783137174 +0000 UTC m=+4.886459904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783178 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert podName:18f0164f-0875-4668-b155-df69e05e8ae0 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783167665 +0000 UTC m=+4.886490205 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert") pod "ingress-canary-nxtms" (UID: "18f0164f-0875-4668-b155-df69e05e8ae0") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783204 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783193875 +0000 UTC m=+4.886516415 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783240 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783227326 +0000 UTC m=+4.886549866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783278 master-0 kubenswrapper[32968]: E0309 16:46:17.783259 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783250087 +0000 UTC m=+4.886572837 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783624 master-0 kubenswrapper[32968]: E0309 16:46:17.783343 32968 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-5k05m0jd20f8o: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783624 master-0 kubenswrapper[32968]: E0309 16:46:17.783374 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783624 master-0 kubenswrapper[32968]: E0309 16:46:17.783406 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.78338414 +0000 UTC m=+4.886706680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.783624 master-0 kubenswrapper[32968]: E0309 16:46:17.783443 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783413991 +0000 UTC m=+4.886736741 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783624 master-0 kubenswrapper[32968]: E0309 16:46:17.783460 32968 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.783624 master-0 kubenswrapper[32968]: E0309 16:46:17.783501 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca podName:8677cbd3-649f-41cd-8b8a-eadca971906b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.783492283 +0000 UTC m=+4.886814823 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca") pod "route-controller-manager-675f85b8f7-bt9gb" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.784494 master-0 kubenswrapper[32968]: E0309 16:46:17.784458 32968 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784564 master-0 kubenswrapper[32968]: E0309 16:46:17.784515 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls podName:92bd7735-8e3c-43bb-b543-03e6e6c5142a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.78450382 +0000 UTC m=+4.887826360 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-j9x6n" (UID: "92bd7735-8e3c-43bb-b543-03e6e6c5142a") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784620 master-0 kubenswrapper[32968]: E0309 16:46:17.784596 32968 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784684 master-0 kubenswrapper[32968]: E0309 16:46:17.784648 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.784635233 +0000 UTC m=+4.887957773 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784684 master-0 kubenswrapper[32968]: E0309 16:46:17.784675 32968 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784761 master-0 kubenswrapper[32968]: E0309 16:46:17.784701 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs podName:82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.784694775 +0000 UTC m=+4.888017315 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs") pod "machine-config-server-7d5bx" (UID: "82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784761 master-0 kubenswrapper[32968]: E0309 16:46:17.784727 32968 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784761 master-0 kubenswrapper[32968]: E0309 16:46:17.784735 32968 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.784761 master-0 kubenswrapper[32968]: E0309 16:46:17.784743 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.784913 master-0 kubenswrapper[32968]: E0309 16:46:17.784755 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls podName:ea34ff7e-27fa-4c26-acc0-ec551985eb76 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.784747906 +0000 UTC m=+4.888070446 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" (UID: "ea34ff7e-27fa-4c26-acc0-ec551985eb76") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784913 master-0 kubenswrapper[32968]: E0309 16:46:17.784748 32968 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.784913 master-0 kubenswrapper[32968]: I0309 16:46:17.784845 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 09 16:46:17.784913 master-0 kubenswrapper[32968]: E0309 16:46:17.784862 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images podName:ea34ff7e-27fa-4c26-acc0-ec551985eb76 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.784827428 +0000 UTC m=+4.888150138 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" (UID: "ea34ff7e-27fa-4c26-acc0-ec551985eb76") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.784913 master-0 kubenswrapper[32968]: E0309 16:46:17.784889 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.784880811 +0000 UTC m=+4.888203351 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.784913 master-0 kubenswrapper[32968]: E0309 16:46:17.784913 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config podName:e5c4ccb0-f795-44bd-9bb4-baf84564c239 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.784901711 +0000 UTC m=+4.888224471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-hnc7v" (UID: "e5c4ccb0-f795-44bd-9bb4-baf84564c239") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.785100 master-0 kubenswrapper[32968]: E0309 16:46:17.785040 32968 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.785100 master-0 kubenswrapper[32968]: E0309 16:46:17.785084 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config podName:92bd7735-8e3c-43bb-b543-03e6e6c5142a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.785075396 +0000 UTC m=+4.888397936 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-j9x6n" (UID: "92bd7735-8e3c-43bb-b543-03e6e6c5142a") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.785915 master-0 kubenswrapper[32968]: E0309 16:46:17.785872 32968 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.785915 master-0 kubenswrapper[32968]: E0309 16:46:17.785883 32968 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785919 32968 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785927 32968 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785921 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert podName:8677cbd3-649f-41cd-8b8a-eadca971906b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.785911118 +0000 UTC m=+4.889233658 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert") pod "route-controller-manager-675f85b8f7-bt9gb" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785963 32968 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785988 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.785977019 +0000 UTC m=+4.889299559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785894 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.786003 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.78599767 +0000 UTC m=+4.889320210 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.785999 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786015 master-0 kubenswrapper[32968]: E0309 16:46:17.786021 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.78601481 +0000 UTC m=+4.889337350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.786044 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.786029821 +0000 UTC m=+4.889352361 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.786066 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config podName:8677cbd3-649f-41cd-8b8a-eadca971906b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.786054581 +0000 UTC m=+4.889377111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config") pod "route-controller-manager-675f85b8f7-bt9gb" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.786082 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca podName:92bd7735-8e3c-43bb-b543-03e6e6c5142a nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.786074602 +0000 UTC m=+4.889397142 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-j9x6n" (UID: "92bd7735-8e3c-43bb-b543-03e6e6c5142a") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.785849 32968 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.786169 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.786146934 +0000 UTC m=+4.889469644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.786175 32968 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.786251 master-0 kubenswrapper[32968]: E0309 16:46:17.786211 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.786204555 +0000 UTC m=+4.889527095 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.787149 master-0 kubenswrapper[32968]: E0309 16:46:17.787120 32968 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.787198 master-0 kubenswrapper[32968]: E0309 16:46:17.787153 32968 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 09 16:46:17.787198 master-0 kubenswrapper[32968]: E0309 16:46:17.787175 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config podName:79a8ea87-c29a-4248-927f-6f1acfc494d7 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.787161211 +0000 UTC m=+4.890483751 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-d4f6dc665-658vm" (UID: "79a8ea87-c29a-4248-927f-6f1acfc494d7") : failed to sync secret cache: timed out waiting for the condition
Mar 09 16:46:17.787198 master-0 kubenswrapper[32968]: E0309 16:46:17.787200 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config podName:ea34ff7e-27fa-4c26-acc0-ec551985eb76 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.787189571 +0000 UTC m=+4.890512111 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" (UID: "ea34ff7e-27fa-4c26-acc0-ec551985eb76") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.787294 master-0 kubenswrapper[32968]: E0309 16:46:17.787213 32968 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.787294 master-0 kubenswrapper[32968]: E0309 16:46:17.787238 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca podName:e5c4ccb0-f795-44bd-9bb4-baf84564c239 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.787231332 +0000 UTC m=+4.890553872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca") pod "prometheus-operator-5ff8674d55-hnc7v" (UID: "e5c4ccb0-f795-44bd-9bb4-baf84564c239") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.787511 master-0 kubenswrapper[32968]: E0309 16:46:17.787483 32968 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.787555 master-0 kubenswrapper[32968]: E0309 16:46:17.787511 32968 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.787555 master-0 kubenswrapper[32968]: E0309 16:46:17.787535 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs podName:e91a0e23-c95b-4290-9c0c-29101febfc8f 
nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.78752567 +0000 UTC m=+4.890848200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs") pod "multus-admission-controller-7769569c45-jcsfw" (UID: "e91a0e23-c95b-4290-9c0c-29101febfc8f") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.787680 master-0 kubenswrapper[32968]: E0309 16:46:17.787574 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls podName:ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.787556101 +0000 UTC m=+4.890878831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls") pod "metrics-server-7c4558858-9rclt" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.789582 master-0 kubenswrapper[32968]: E0309 16:46:17.789547 32968 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.789628 master-0 kubenswrapper[32968]: E0309 16:46:17.789579 32968 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.789628 master-0 kubenswrapper[32968]: E0309 16:46:17.789598 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config podName:b9fc9e7d-652c-4063-9cdb-358f58cae29a nodeName:}" failed. 
No retries permitted until 2026-03-09 16:46:18.789587445 +0000 UTC m=+4.892909985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config") pod "node-exporter-qjk4k" (UID: "b9fc9e7d-652c-4063-9cdb-358f58cae29a") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.789628 master-0 kubenswrapper[32968]: E0309 16:46:17.789606 32968 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.789728 master-0 kubenswrapper[32968]: E0309 16:46:17.789630 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config podName:ebbec674-ac49-422a-9548-5c29b15ad44d nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.789619606 +0000 UTC m=+4.892942146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-cwzvv" (UID: "ebbec674-ac49-422a-9548-5c29b15ad44d") : failed to sync secret cache: timed out waiting for the condition Mar 09 16:46:17.789728 master-0 kubenswrapper[32968]: E0309 16:46:17.789652 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.789641887 +0000 UTC m=+4.892964427 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.789787 master-0 kubenswrapper[32968]: E0309 16:46:17.789736 32968 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.789893 master-0 kubenswrapper[32968]: E0309 16:46:17.789864 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles podName:7d1143ae-d94a-43f2-8e75-95aae13a5c57 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:18.789847822 +0000 UTC m=+4.893170552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles") pod "controller-manager-5c5964c98f-tm4pb" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57") : failed to sync configmap cache: timed out waiting for the condition Mar 09 16:46:17.804284 master-0 kubenswrapper[32968]: I0309 16:46:17.804211 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 09 16:46:17.825971 master-0 kubenswrapper[32968]: I0309 16:46:17.825916 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-gkx8f" Mar 09 16:46:17.843916 master-0 kubenswrapper[32968]: I0309 16:46:17.843864 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 09 16:46:17.865044 master-0 kubenswrapper[32968]: I0309 16:46:17.864985 32968 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 09 16:46:17.885160 master-0 kubenswrapper[32968]: I0309 16:46:17.885105 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 09 16:46:17.904975 master-0 kubenswrapper[32968]: I0309 16:46:17.904905 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 09 16:46:17.925111 master-0 kubenswrapper[32968]: I0309 16:46:17.925040 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-nzqc5" Mar 09 16:46:17.945068 master-0 kubenswrapper[32968]: I0309 16:46:17.944995 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 09 16:46:17.963901 master-0 kubenswrapper[32968]: I0309 16:46:17.963815 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 09 16:46:17.984851 master-0 kubenswrapper[32968]: I0309 16:46:17.984776 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-n45mc" Mar 09 16:46:18.007903 master-0 kubenswrapper[32968]: I0309 16:46:18.007843 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 09 16:46:18.025360 master-0 kubenswrapper[32968]: I0309 16:46:18.025149 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 09 16:46:18.045090 master-0 kubenswrapper[32968]: I0309 16:46:18.045025 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 09 16:46:18.066629 master-0 kubenswrapper[32968]: I0309 16:46:18.066562 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vp2pt" Mar 09 16:46:18.083941 master-0 kubenswrapper[32968]: I0309 16:46:18.083790 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 09 16:46:18.102620 master-0 kubenswrapper[32968]: I0309 16:46:18.102548 32968 request.go:700] Waited for 3.004214684s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dnode-exporter-dockercfg-9j6gd&limit=500&resourceVersion=0 Mar 09 16:46:18.105056 master-0 kubenswrapper[32968]: I0309 16:46:18.104997 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9j6gd" Mar 09 16:46:18.124800 master-0 kubenswrapper[32968]: I0309 16:46:18.124731 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 09 16:46:18.144831 master-0 kubenswrapper[32968]: I0309 16:46:18.144648 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 09 16:46:18.164313 master-0 kubenswrapper[32968]: I0309 16:46:18.164239 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-h7zpd" Mar 09 16:46:18.185817 master-0 kubenswrapper[32968]: I0309 16:46:18.185403 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 09 16:46:18.204803 master-0 kubenswrapper[32968]: I0309 16:46:18.204733 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 09 16:46:18.225954 master-0 kubenswrapper[32968]: I0309 16:46:18.225894 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 09 16:46:18.245594 master-0 kubenswrapper[32968]: I0309 16:46:18.245139 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-ns927" Mar 09 16:46:18.267549 master-0 kubenswrapper[32968]: I0309 16:46:18.267486 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 09 16:46:18.284875 master-0 kubenswrapper[32968]: I0309 16:46:18.284671 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5k05m0jd20f8o" Mar 09 16:46:18.306501 master-0 kubenswrapper[32968]: I0309 16:46:18.306319 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 09 16:46:18.325657 master-0 kubenswrapper[32968]: I0309 16:46:18.325597 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 09 16:46:18.346585 master-0 kubenswrapper[32968]: I0309 16:46:18.346519 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 09 16:46:18.364271 master-0 kubenswrapper[32968]: I0309 16:46:18.364206 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58glv" Mar 09 16:46:18.399192 master-0 kubenswrapper[32968]: I0309 16:46:18.398583 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 09 16:46:18.404517 master-0 kubenswrapper[32968]: I0309 16:46:18.404471 32968 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 09 16:46:18.425026 master-0 kubenswrapper[32968]: I0309 16:46:18.424218 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 09 16:46:18.447734 master-0 kubenswrapper[32968]: I0309 16:46:18.447573 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 16:46:18.468232 master-0 kubenswrapper[32968]: I0309 16:46:18.467895 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 09 16:46:18.485966 master-0 kubenswrapper[32968]: I0309 16:46:18.485540 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 09 16:46:18.504329 master-0 kubenswrapper[32968]: I0309 16:46:18.504225 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-96tct" Mar 09 16:46:18.538600 master-0 kubenswrapper[32968]: I0309 16:46:18.528031 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 09 16:46:18.548956 master-0 kubenswrapper[32968]: I0309 16:46:18.548882 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 09 16:46:18.571699 master-0 kubenswrapper[32968]: I0309 16:46:18.571629 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 09 16:46:18.586008 master-0 kubenswrapper[32968]: I0309 16:46:18.584065 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 16:46:18.606692 master-0 kubenswrapper[32968]: I0309 16:46:18.604900 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-4n2zt" Mar 09 16:46:18.630002 master-0 kubenswrapper[32968]: I0309 16:46:18.629637 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 16:46:18.647456 master-0 kubenswrapper[32968]: I0309 16:46:18.644694 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 16:46:18.671764 master-0 kubenswrapper[32968]: I0309 16:46:18.671703 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 16:46:18.689560 master-0 kubenswrapper[32968]: I0309 16:46:18.688957 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 16:46:18.703929 master-0 kubenswrapper[32968]: I0309 16:46:18.703860 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 09 16:46:18.725282 master-0 kubenswrapper[32968]: I0309 16:46:18.724226 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-69c4t" Mar 09 16:46:18.746098 master-0 kubenswrapper[32968]: I0309 16:46:18.746033 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 09 16:46:18.765994 master-0 kubenswrapper[32968]: I0309 16:46:18.765924 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7zp7c" Mar 09 16:46:18.785079 master-0 kubenswrapper[32968]: I0309 16:46:18.785018 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 09 16:46:18.804985 master-0 kubenswrapper[32968]: I0309 16:46:18.804823 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 09 16:46:18.833328 master-0 kubenswrapper[32968]: I0309 16:46:18.833269 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.833647 master-0 kubenswrapper[32968]: I0309 16:46:18.833467 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:18.833647 master-0 kubenswrapper[32968]: I0309 16:46:18.833518 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:46:18.833647 master-0 kubenswrapper[32968]: I0309 16:46:18.833561 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:18.833821 master-0 kubenswrapper[32968]: I0309 16:46:18.833760 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:18.833886 master-0 kubenswrapper[32968]: I0309 16:46:18.833791 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 09 16:46:18.833886 master-0 kubenswrapper[32968]: I0309 16:46:18.833831 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:18.833886 master-0 kubenswrapper[32968]: I0309 16:46:18.833874 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:18.834011 master-0 kubenswrapper[32968]: I0309 16:46:18.833922 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.834011 master-0 kubenswrapper[32968]: I0309 16:46:18.833952 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e91a0e23-c95b-4290-9c0c-29101febfc8f-webhook-certs\") pod 
\"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:46:18.834229 master-0 kubenswrapper[32968]: I0309 16:46:18.834194 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.834350 master-0 kubenswrapper[32968]: I0309 16:46:18.834319 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:18.834582 master-0 kubenswrapper[32968]: I0309 16:46:18.834551 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:18.834645 master-0 kubenswrapper[32968]: I0309 16:46:18.834588 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.834645 master-0 kubenswrapper[32968]: I0309 16:46:18.834631 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:18.834796 master-0 kubenswrapper[32968]: I0309 16:46:18.834750 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:18.834851 master-0 kubenswrapper[32968]: I0309 16:46:18.834789 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9fc9e7d-652c-4063-9cdb-358f58cae29a-metrics-client-ca\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:18.834899 master-0 kubenswrapper[32968]: I0309 16:46:18.834866 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.834941 master-0 kubenswrapper[32968]: I0309 16:46:18.834892 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " 
pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:18.834941 master-0 kubenswrapper[32968]: I0309 16:46:18.834934 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.835021 master-0 kubenswrapper[32968]: I0309 16:46:18.834955 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.835021 master-0 kubenswrapper[32968]: I0309 16:46:18.834961 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:18.835178 master-0 kubenswrapper[32968]: I0309 16:46:18.835146 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.835236 master-0 kubenswrapper[32968]: I0309 16:46:18.835187 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.835236 master-0 kubenswrapper[32968]: I0309 16:46:18.835210 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-node-bootstrap-token\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:18.835236 master-0 kubenswrapper[32968]: I0309 16:46:18.835215 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.835352 master-0 kubenswrapper[32968]: I0309 16:46:18.835254 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms" Mar 09 16:46:18.835352 master-0 kubenswrapper[32968]: I0309 16:46:18.835283 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.835480 master-0 kubenswrapper[32968]: I0309 16:46:18.835384 
32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.835534 master-0 kubenswrapper[32968]: I0309 16:46:18.835508 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:18.835587 master-0 kubenswrapper[32968]: I0309 16:46:18.835545 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.835640 master-0 kubenswrapper[32968]: I0309 16:46:18.835617 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:18.835679 master-0 kubenswrapper[32968]: I0309 16:46:18.835644 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.835679 master-0 kubenswrapper[32968]: I0309 16:46:18.835647 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:46:18.835940 master-0 kubenswrapper[32968]: I0309 16:46:18.835904 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.836004 master-0 kubenswrapper[32968]: I0309 16:46:18.835397 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-metrics-client-ca\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.836004 master-0 kubenswrapper[32968]: I0309 16:46:18.835992 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8972b380-8f87-4b73-8f95-440d34d03884-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:46:18.836090 master-0 kubenswrapper[32968]: I0309 16:46:18.836075 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:18.836131 master-0 kubenswrapper[32968]: I0309 16:46:18.836112 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:18.836212 master-0 kubenswrapper[32968]: I0309 16:46:18.836178 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/18f0164f-0875-4668-b155-df69e05e8ae0-cert\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms" Mar 09 16:46:18.836212 master-0 kubenswrapper[32968]: I0309 16:46:18.836199 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:18.836308 master-0 kubenswrapper[32968]: I0309 16:46:18.836279 32968 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.836368 master-0 kubenswrapper[32968]: I0309 16:46:18.836313 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.836368 master-0 kubenswrapper[32968]: I0309 16:46:18.836347 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.836468 master-0 kubenswrapper[32968]: I0309 16:46:18.836381 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:18.836468 master-0 kubenswrapper[32968]: I0309 16:46:18.836453 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod 
\"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.836549 master-0 kubenswrapper[32968]: I0309 16:46:18.836479 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.836549 master-0 kubenswrapper[32968]: I0309 16:46:18.836489 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.836628 master-0 kubenswrapper[32968]: I0309 16:46:18.836559 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:18.836667 master-0 kubenswrapper[32968]: I0309 16:46:18.836643 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:18.836729 master-0 kubenswrapper[32968]: I0309 16:46:18.836662 32968 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/92bd7735-8e3c-43bb-b543-03e6e6c5142a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:18.836729 master-0 kubenswrapper[32968]: I0309 16:46:18.836698 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.836887 master-0 kubenswrapper[32968]: I0309 16:46:18.836858 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea34ff7e-27fa-4c26-acc0-ec551985eb76-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:18.837044 master-0 kubenswrapper[32968]: I0309 16:46:18.837013 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.837104 master-0 kubenswrapper[32968]: I0309 16:46:18.837061 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.837466 master-0 kubenswrapper[32968]: I0309 16:46:18.837154 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:18.837536 master-0 kubenswrapper[32968]: I0309 16:46:18.837479 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:18.837536 master-0 kubenswrapper[32968]: I0309 16:46:18.837258 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.837536 master-0 kubenswrapper[32968]: I0309 16:46:18.837319 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b9fc9e7d-652c-4063-9cdb-358f58cae29a-node-exporter-tls\") pod \"node-exporter-qjk4k\" (UID: 
\"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:18.837659 master-0 kubenswrapper[32968]: I0309 16:46:18.837606 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.837744 master-0 kubenswrapper[32968]: I0309 16:46:18.837712 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e5c4ccb0-f795-44bd-9bb4-baf84564c239-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:18.837808 master-0 kubenswrapper[32968]: I0309 16:46:18.837773 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.837853 master-0 kubenswrapper[32968]: I0309 16:46:18.837825 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/92bd7735-8e3c-43bb-b543-03e6e6c5142a-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:18.837999 master-0 kubenswrapper[32968]: I0309 16:46:18.837962 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:18.838076 master-0 kubenswrapper[32968]: I0309 16:46:18.838032 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.838174 master-0 kubenswrapper[32968]: I0309 16:46:18.838137 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:18.838174 master-0 kubenswrapper[32968]: I0309 16:46:18.838162 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838215 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") 
" pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838262 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838299 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838319 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e5c4ccb0-f795-44bd-9bb4-baf84564c239-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838335 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838378 32968 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.838552 master-0 kubenswrapper[32968]: I0309 16:46:18.838526 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea34ff7e-27fa-4c26-acc0-ec551985eb76-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:18.838831 master-0 kubenswrapper[32968]: I0309 16:46:18.838710 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.838831 master-0 kubenswrapper[32968]: I0309 16:46:18.838724 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.839058 master-0 kubenswrapper[32968]: I0309 16:46:18.838906 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:18.839121 master-0 kubenswrapper[32968]: I0309 16:46:18.839078 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-certs\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:18.839121 master-0 kubenswrapper[32968]: I0309 16:46:18.839092 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:18.845304 master-0 kubenswrapper[32968]: I0309 16:46:18.845251 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 09 16:46:18.846871 master-0 kubenswrapper[32968]: I0309 16:46:18.846788 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/79a8ea87-c29a-4248-927f-6f1acfc494d7-federate-client-tls\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.864869 master-0 kubenswrapper[32968]: I0309 16:46:18.864795 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 09 16:46:18.866053 master-0 kubenswrapper[32968]: I0309 16:46:18.866001 32968 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8ea87-c29a-4248-927f-6f1acfc494d7-serving-certs-ca-bundle\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:18.884893 master-0 kubenswrapper[32968]: I0309 16:46:18.884820 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 09 16:46:18.905302 master-0 kubenswrapper[32968]: I0309 16:46:18.905232 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cshl6" Mar 09 16:46:18.945370 master-0 kubenswrapper[32968]: I0309 16:46:18.945289 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkxg\" (UniqueName: \"kubernetes.io/projected/8d1829b3-643f-4f79-b6de-ae6ca5e78d4a-kube-api-access-4gkxg\") pod \"cluster-autoscaler-operator-69576476f7-jzjhh\" (UID: \"8d1829b3-643f-4f79-b6de-ae6ca5e78d4a\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jzjhh" Mar 09 16:46:18.960716 master-0 kubenswrapper[32968]: I0309 16:46:18.960671 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhct\" (UniqueName: \"kubernetes.io/projected/ebbec674-ac49-422a-9548-5c29b15ad44d-kube-api-access-jrhct\") pod \"kube-state-metrics-68b88f8cb5-cwzvv\" (UID: \"ebbec674-ac49-422a-9548-5c29b15ad44d\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-cwzvv" Mar 09 16:46:18.980398 master-0 kubenswrapper[32968]: I0309 16:46:18.980328 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvh62\" (UniqueName: \"kubernetes.io/projected/60e07bf5-933c-4ff6-9a1a-2fd05392c8e9-kube-api-access-kvh62\") pod \"network-node-identity-nqwd2\" (UID: \"60e07bf5-933c-4ff6-9a1a-2fd05392c8e9\") " 
pod="openshift-network-node-identity/network-node-identity-nqwd2" Mar 09 16:46:19.004912 master-0 kubenswrapper[32968]: I0309 16:46:19.004838 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlnq\" (UniqueName: \"kubernetes.io/projected/73f1f0ba-f90e-45aa-b1ba-df011a5b9d56-kube-api-access-dxlnq\") pod \"router-default-79f8cd6fdd-rvnwf\" (UID: \"73f1f0ba-f90e-45aa-b1ba-df011a5b9d56\") " pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:19.017906 master-0 kubenswrapper[32968]: I0309 16:46:19.017824 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2bk\" (UniqueName: \"kubernetes.io/projected/18f0164f-0875-4668-b155-df69e05e8ae0-kube-api-access-pq2bk\") pod \"ingress-canary-nxtms\" (UID: \"18f0164f-0875-4668-b155-df69e05e8ae0\") " pod="openshift-ingress-canary/ingress-canary-nxtms" Mar 09 16:46:19.043521 master-0 kubenswrapper[32968]: I0309 16:46:19.043455 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-579rp\" (UniqueName: \"kubernetes.io/projected/709aad35-08ca-4ff5-abe5-e1558c8dc83f-kube-api-access-579rp\") pod \"iptables-alerter-g4tdb\" (UID: \"709aad35-08ca-4ff5-abe5-e1558c8dc83f\") " pod="openshift-network-operator/iptables-alerter-g4tdb" Mar 09 16:46:19.056924 master-0 kubenswrapper[32968]: I0309 16:46:19.056729 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 09 16:46:19.079858 master-0 kubenswrapper[32968]: I0309 16:46:19.079794 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvfgw\" (UniqueName: \"kubernetes.io/projected/e5c4ccb0-f795-44bd-9bb4-baf84564c239-kube-api-access-cvfgw\") pod 
\"prometheus-operator-5ff8674d55-hnc7v\" (UID: \"e5c4ccb0-f795-44bd-9bb4-baf84564c239\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-hnc7v" Mar 09 16:46:19.108062 master-0 kubenswrapper[32968]: I0309 16:46:19.107990 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7dea5-9848-41f0-bf0b-ec70ec0380f1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-sc9tf\" (UID: \"eaf7dea5-9848-41f0-bf0b-ec70ec0380f1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-sc9tf" Mar 09 16:46:19.122717 master-0 kubenswrapper[32968]: I0309 16:46:19.122367 32968 request.go:700] Waited for 3.928417838s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Mar 09 16:46:19.126386 master-0 kubenswrapper[32968]: I0309 16:46:19.126324 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjf4p\" (UniqueName: \"kubernetes.io/projected/9482fb93-c223-45ee-bde8-7667303270b6-kube-api-access-qjf4p\") pod \"network-check-source-7c67b67d47-d9wjb\" (UID: \"9482fb93-c223-45ee-bde8-7667303270b6\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-d9wjb" Mar 09 16:46:19.142836 master-0 kubenswrapper[32968]: I0309 16:46:19.142775 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcvbf\" (UniqueName: \"kubernetes.io/projected/a6cd9347-eec9-4549-9de4-6033112634ce-kube-api-access-lcvbf\") pod \"machine-api-operator-84bf6db4f9-4qg6v\" (UID: \"a6cd9347-eec9-4549-9de4-6033112634ce\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-4qg6v" Mar 09 16:46:19.174379 master-0 kubenswrapper[32968]: I0309 16:46:19.174295 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl697\" (UniqueName: 
\"kubernetes.io/projected/ea34ff7e-27fa-4c26-acc0-ec551985eb76-kube-api-access-fl697\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-zctw6\" (UID: \"ea34ff7e-27fa-4c26-acc0-ec551985eb76\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-zctw6" Mar 09 16:46:19.182366 master-0 kubenswrapper[32968]: I0309 16:46:19.182260 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4ff\" (UniqueName: \"kubernetes.io/projected/7937ccab-a6fb-4401-a4fd-7a2a91a7193f-kube-api-access-cm4ff\") pod \"network-check-target-ncskk\" (UID: \"7937ccab-a6fb-4401-a4fd-7a2a91a7193f\") " pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:46:19.204888 master-0 kubenswrapper[32968]: I0309 16:46:19.204817 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grmch\" (UniqueName: \"kubernetes.io/projected/f3033e86-fee0-45dc-ba74-d5448a777400-kube-api-access-grmch\") pod \"migrator-57ccdf9b5-4vd54\" (UID: \"f3033e86-fee0-45dc-ba74-d5448a777400\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-4vd54" Mar 09 16:46:19.219622 master-0 kubenswrapper[32968]: I0309 16:46:19.219568 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl5kt\" (UniqueName: \"kubernetes.io/projected/8c93fb5d-373d-4473-99dd-50e4398bafbf-kube-api-access-nl5kt\") pod \"apiserver-dc6bb954d-kxhv7\" (UID: \"8c93fb5d-373d-4473-99dd-50e4398bafbf\") " pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:19.261944 master-0 kubenswrapper[32968]: E0309 16:46:19.261865 32968 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 09 16:46:19.261944 master-0 kubenswrapper[32968]: E0309 16:46:19.261926 32968 projected.go:194] Error preparing data for projected volume kube-api-access for pod 
openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 09 16:46:19.262368 master-0 kubenswrapper[32968]: E0309 16:46:19.262047 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access podName:696fcca2-df1a-491d-956d-1cfda1ee5e70 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:19.762016568 +0000 UTC m=+5.865339108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access") pod "installer-4-master-0" (UID: "696fcca2-df1a-491d-956d-1cfda1ee5e70") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 09 16:46:19.264012 master-0 kubenswrapper[32968]: I0309 16:46:19.263959 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkfn\" (UniqueName: \"kubernetes.io/projected/79a8ea87-c29a-4248-927f-6f1acfc494d7-kube-api-access-rvkfn\") pod \"telemeter-client-d4f6dc665-658vm\" (UID: \"79a8ea87-c29a-4248-927f-6f1acfc494d7\") " pod="openshift-monitoring/telemeter-client-d4f6dc665-658vm" Mar 09 16:46:19.272837 master-0 kubenswrapper[32968]: I0309 16:46:19.272778 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:19.273011 master-0 kubenswrapper[32968]: I0309 16:46:19.272864 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:19.279951 master-0 kubenswrapper[32968]: I0309 16:46:19.279900 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rjs\" (UniqueName: \"kubernetes.io/projected/4a2aa6f3-f049-423a-a8f5-5d33fc214a7b-kube-api-access-p8rjs\") pod \"catalogd-controller-manager-7f8b8b6f4c-xrgml\" (UID: 
\"4a2aa6f3-f049-423a-a8f5-5d33fc214a7b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:19.281603 master-0 kubenswrapper[32968]: I0309 16:46:19.281560 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:19.312741 master-0 kubenswrapper[32968]: I0309 16:46:19.312578 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgl27\" (UniqueName: \"kubernetes.io/projected/1da6f189-535a-4bf1-bbdb-758327651ae2-kube-api-access-xgl27\") pod \"redhat-operators-49bwx\" (UID: \"1da6f189-535a-4bf1-bbdb-758327651ae2\") " pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:46:19.331245 master-0 kubenswrapper[32968]: I0309 16:46:19.331143 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2qw\" (UniqueName: \"kubernetes.io/projected/8796f37c-d1ec-469d-90df-e007bf620e8c-kube-api-access-6n2qw\") pod \"packageserver-775b84c99f-6ffjr\" (UID: \"8796f37c-d1ec-469d-90df-e007bf620e8c\") " pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr" Mar 09 16:46:19.342101 master-0 kubenswrapper[32968]: I0309 16:46:19.342031 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-868cs\" (UniqueName: \"kubernetes.io/projected/a320d845-3a5d-4027-a765-f0b2dc07f9de-kube-api-access-868cs\") pod \"cloud-credential-operator-55d85b7b47-6zcn7\" (UID: \"a320d845-3a5d-4027-a765-f0b2dc07f9de\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-6zcn7" Mar 09 16:46:19.345786 master-0 kubenswrapper[32968]: I0309 16:46:19.345739 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access\") pod \"696fcca2-df1a-491d-956d-1cfda1ee5e70\" (UID: 
\"696fcca2-df1a-491d-956d-1cfda1ee5e70\") " Mar 09 16:46:19.350871 master-0 kubenswrapper[32968]: I0309 16:46:19.350296 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "696fcca2-df1a-491d-956d-1cfda1ee5e70" (UID: "696fcca2-df1a-491d-956d-1cfda1ee5e70"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:46:19.357256 master-0 kubenswrapper[32968]: I0309 16:46:19.356539 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbkx\" (UniqueName: \"kubernetes.io/projected/3ec3050d-8e6f-466a-995a-f78270408a85-kube-api-access-qsbkx\") pod \"machine-approver-754bdc9f9d-pfbvg\" (UID: \"3ec3050d-8e6f-466a-995a-f78270408a85\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-pfbvg" Mar 09 16:46:19.388380 master-0 kubenswrapper[32968]: I0309 16:46:19.388299 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9cq\" (UniqueName: \"kubernetes.io/projected/aec186fc-aead-47fb-a7e1-8c9325897c47-kube-api-access-vj9cq\") pod \"community-operators-zrqjw\" (UID: \"aec186fc-aead-47fb-a7e1-8c9325897c47\") " pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:46:19.404145 master-0 kubenswrapper[32968]: I0309 16:46:19.404073 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkqg\" (UniqueName: \"kubernetes.io/projected/57036838-9f42-4ea1-a5c9-77f820cc22c9-kube-api-access-czkqg\") pod \"csi-snapshot-controller-7577d6f48-f594m\" (UID: \"57036838-9f42-4ea1-a5c9-77f820cc22c9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-f594m" Mar 09 16:46:19.416799 master-0 kubenswrapper[32968]: I0309 16:46:19.416698 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpfl\" 
(UniqueName: \"kubernetes.io/projected/631f2bdf-2ed4-4315-98c3-c5a538d0aec3-kube-api-access-shpfl\") pod \"cluster-storage-operator-6fbfc8dc8f-8nlvp\" (UID: \"631f2bdf-2ed4-4315-98c3-c5a538d0aec3\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-8nlvp" Mar 09 16:46:19.446176 master-0 kubenswrapper[32968]: I0309 16:46:19.446102 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnstc\" (UniqueName: \"kubernetes.io/projected/b9fc9e7d-652c-4063-9cdb-358f58cae29a-kube-api-access-xnstc\") pod \"node-exporter-qjk4k\" (UID: \"b9fc9e7d-652c-4063-9cdb-358f58cae29a\") " pod="openshift-monitoring/node-exporter-qjk4k" Mar 09 16:46:19.448746 master-0 kubenswrapper[32968]: I0309 16:46:19.448685 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/696fcca2-df1a-491d-956d-1cfda1ee5e70-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:46:19.465102 master-0 kubenswrapper[32968]: I0309 16:46:19.465023 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrms4\" (UniqueName: \"kubernetes.io/projected/82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9-kube-api-access-rrms4\") pod \"machine-config-server-7d5bx\" (UID: \"82e99f72-ea84-4c03-90d3-d7b3bcc0f2e9\") " pod="openshift-machine-config-operator/machine-config-server-7d5bx" Mar 09 16:46:19.546101 master-0 kubenswrapper[32968]: I0309 16:46:19.546030 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 09 16:46:19.563692 master-0 kubenswrapper[32968]: I0309 16:46:19.563553 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") pod \"metrics-server-7c4558858-9rclt\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " 
pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:19.566369 master-0 kubenswrapper[32968]: I0309 16:46:19.566309 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw4zf\" (UniqueName: \"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") pod \"route-controller-manager-675f85b8f7-bt9gb\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") " pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:19.566556 master-0 kubenswrapper[32968]: I0309 16:46:19.566359 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqdm\" (UniqueName: \"kubernetes.io/projected/af4aa8d4-09e1-4589-b7bf-885617a11337-kube-api-access-whqdm\") pod \"service-ca-84bfdbbb7f-6r6g2\" (UID: \"af4aa8d4-09e1-4589-b7bf-885617a11337\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-6r6g2" Mar 09 16:46:19.568586 master-0 kubenswrapper[32968]: I0309 16:46:19.568546 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhglf\" (UniqueName: \"kubernetes.io/projected/baf704e3-daf2-4934-a04e-d31df8df0c4a-kube-api-access-nhglf\") pod \"machine-config-daemon-94s4v\" (UID: \"baf704e3-daf2-4934-a04e-d31df8df0c4a\") " pod="openshift-machine-config-operator/machine-config-daemon-94s4v" Mar 09 16:46:19.569209 master-0 kubenswrapper[32968]: I0309 16:46:19.569160 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwnd\" (UniqueName: \"kubernetes.io/projected/8972b380-8f87-4b73-8f95-440d34d03884-kube-api-access-8hwnd\") pod \"machine-config-controller-ff46b7bdf-xqpdd\" (UID: \"8972b380-8f87-4b73-8f95-440d34d03884\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-xqpdd" Mar 09 16:46:19.592571 master-0 kubenswrapper[32968]: I0309 16:46:19.592467 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kc2t2\" (UniqueName: \"kubernetes.io/projected/217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8-kube-api-access-kc2t2\") pod \"apiserver-67495f79c-bcblv\" (UID: \"217473c4-ef8f-4f4f-bce9-e92d5cc1e5b8\") " pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:19.642588 master-0 kubenswrapper[32968]: I0309 16:46:19.642535 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26xps\" (UniqueName: \"kubernetes.io/projected/e91a0e23-c95b-4290-9c0c-29101febfc8f-kube-api-access-26xps\") pod \"multus-admission-controller-7769569c45-jcsfw\" (UID: \"e91a0e23-c95b-4290-9c0c-29101febfc8f\") " pod="openshift-multus/multus-admission-controller-7769569c45-jcsfw" Mar 09 16:46:19.642818 master-0 kubenswrapper[32968]: I0309 16:46:19.642671 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dzfq\" (UniqueName: \"kubernetes.io/projected/5587e967-124e-4f2a-b7fb-42cb16bfc337-kube-api-access-4dzfq\") pod \"dns-default-sj6x9\" (UID: \"5587e967-124e-4f2a-b7fb-42cb16bfc337\") " pod="openshift-dns/dns-default-sj6x9" Mar 09 16:46:19.643247 master-0 kubenswrapper[32968]: I0309 16:46:19.643192 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9fx\" (UniqueName: \"kubernetes.io/projected/8be2517a-6f28-4289-a108-6e3345a1e246-kube-api-access-hh9fx\") pod \"insights-operator-8f89dfddd-5fjz8\" (UID: \"8be2517a-6f28-4289-a108-6e3345a1e246\") " pod="openshift-insights/insights-operator-8f89dfddd-5fjz8" Mar 09 16:46:19.700052 master-0 kubenswrapper[32968]: I0309 16:46:19.699988 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8rh\" (UniqueName: \"kubernetes.io/projected/92bd7735-8e3c-43bb-b543-03e6e6c5142a-kube-api-access-dv8rh\") pod \"openshift-state-metrics-74cc79fd76-j9x6n\" (UID: \"92bd7735-8e3c-43bb-b543-03e6e6c5142a\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-j9x6n" Mar 09 16:46:19.700833 
master-0 kubenswrapper[32968]: I0309 16:46:19.700790 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jvl\" (UniqueName: \"kubernetes.io/projected/c72e89f0-37ad-4515-89ba-ba1f52ba61f0-kube-api-access-h8jvl\") pod \"operator-controller-controller-manager-6598bfb6c4-tnbvb\" (UID: \"c72e89f0-37ad-4515-89ba-ba1f52ba61f0\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:19.701822 master-0 kubenswrapper[32968]: I0309 16:46:19.701787 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8hj\" (UniqueName: \"kubernetes.io/projected/0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d-kube-api-access-wn8hj\") pod \"node-resolver-kqtzc\" (UID: \"0b1790eb-a3b2-4cc6-9f0a-f5fb56137c6d\") " pod="openshift-dns/node-resolver-kqtzc" Mar 09 16:46:19.723501 master-0 kubenswrapper[32968]: I0309 16:46:19.723412 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495rn\" (UniqueName: \"kubernetes.io/projected/357570a4-f69b-4970-9b6f-fe06fc4c2f90-kube-api-access-495rn\") pod \"control-plane-machine-set-operator-6686554ddc-cvdzq\" (UID: \"357570a4-f69b-4970-9b6f-fe06fc4c2f90\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-cvdzq" Mar 09 16:46:19.748629 master-0 kubenswrapper[32968]: I0309 16:46:19.748553 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzzg\" (UniqueName: \"kubernetes.io/projected/d6b4992e-50f3-473c-aa83-ed35569ba307-kube-api-access-bhzzg\") pod \"machine-config-operator-fdb5c78b5-db9vp\" (UID: \"d6b4992e-50f3-473c-aa83-ed35569ba307\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-db9vp" Mar 09 16:46:19.865759 master-0 kubenswrapper[32968]: I0309 16:46:19.865569 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 
16:46:19.866194 master-0 kubenswrapper[32968]: I0309 16:46:19.866172 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:19.877628 master-0 kubenswrapper[32968]: I0309 16:46:19.877553 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sj6x9" Mar 09 16:46:19.879404 master-0 kubenswrapper[32968]: I0309 16:46:19.879365 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sj6x9" Mar 09 16:46:19.978058 master-0 kubenswrapper[32968]: I0309 16:46:19.977972 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") pod \"controller-manager-5c5964c98f-tm4pb\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") " pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:46:20.046045 master-0 kubenswrapper[32968]: E0309 16:46:20.045964 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 09 16:46:20.054718 master-0 kubenswrapper[32968]: I0309 16:46:20.054669 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgj24\" (UniqueName: \"kubernetes.io/projected/3745c679-2ea9-4382-9270-4d3fbbaaf296-kube-api-access-jgj24\") pod \"certified-operators-8gkw8\" (UID: \"3745c679-2ea9-4382-9270-4d3fbbaaf296\") " pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:20.054952 master-0 kubenswrapper[32968]: I0309 16:46:20.054930 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdb8\" (UniqueName: \"kubernetes.io/projected/34c0b60e-da69-452d-858d-0af77f18946d-kube-api-access-vmdb8\") pod 
\"cluster-samples-operator-664cb58b85-wd5cw\" (UID: \"34c0b60e-da69-452d-858d-0af77f18946d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-wd5cw" Mar 09 16:46:20.056415 master-0 kubenswrapper[32968]: I0309 16:46:20.056392 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mr7t\" (UniqueName: \"kubernetes.io/projected/c76178f6-3f0b-4b7d-ad23-724b83e35120-kube-api-access-2mr7t\") pod \"tuned-fllqb\" (UID: \"c76178f6-3f0b-4b7d-ad23-724b83e35120\") " pod="openshift-cluster-node-tuning-operator/tuned-fllqb" Mar 09 16:46:20.057854 master-0 kubenswrapper[32968]: I0309 16:46:20.057834 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98bk\" (UniqueName: \"kubernetes.io/projected/be856881-2ceb-4803-8330-4a27ad8b1937-kube-api-access-v98bk\") pod \"redhat-marketplace-zcvrg\" (UID: \"be856881-2ceb-4803-8330-4a27ad8b1937\") " pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:20.060184 master-0 kubenswrapper[32968]: I0309 16:46:20.060157 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98llp\" (UniqueName: \"kubernetes.io/projected/6d47955b-b85c-4137-9dea-ff0c20d5ab77-kube-api-access-98llp\") pod \"ovnkube-node-vwgwh\" (UID: \"6d47955b-b85c-4137-9dea-ff0c20d5ab77\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:20.082065 master-0 kubenswrapper[32968]: I0309 16:46:20.081983 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 09 16:46:20.084315 master-0 kubenswrapper[32968]: I0309 16:46:20.084246 32968 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 09 16:46:20.084547 master-0 kubenswrapper[32968]: I0309 16:46:20.084522 32968 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 09 16:46:20.100189 master-0 kubenswrapper[32968]: I0309 16:46:20.100127 32968 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 09 16:46:20.291250 master-0 kubenswrapper[32968]: I0309 16:46:20.288453 32968 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:20.291250 master-0 kubenswrapper[32968]: [-]has-synced failed: reason withheld Mar 09 16:46:20.291250 master-0 kubenswrapper[32968]: [+]process-running ok Mar 09 16:46:20.291250 master-0 kubenswrapper[32968]: healthz check failed Mar 09 16:46:20.291250 master-0 kubenswrapper[32968]: I0309 16:46:20.288530 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:20.536070 master-0 kubenswrapper[32968]: I0309 16:46:20.535958 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:20.570168 master-0 kubenswrapper[32968]: I0309 16:46:20.570010 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:20.575590 master-0 kubenswrapper[32968]: I0309 16:46:20.575553 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 09 16:46:20.585796 master-0 kubenswrapper[32968]: I0309 16:46:20.585740 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:46:20.594465 master-0 kubenswrapper[32968]: I0309 16:46:20.594384 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-hv8xl" Mar 09 16:46:20.659039 master-0 kubenswrapper[32968]: I0309 16:46:20.658959 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=12.658930002 podStartE2EDuration="12.658930002s" podCreationTimestamp="2026-03-09 16:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:46:20.422936517 +0000 UTC m=+6.526259067" watchObservedRunningTime="2026-03-09 16:46:20.658930002 +0000 UTC m=+6.762252542" Mar 09 16:46:20.976762 master-0 kubenswrapper[32968]: I0309 16:46:20.976714 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:21.239509 master-0 kubenswrapper[32968]: I0309 16:46:21.239336 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:21.241685 master-0 kubenswrapper[32968]: I0309 16:46:21.241667 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-xrgml" Mar 09 16:46:21.285738 master-0 kubenswrapper[32968]: I0309 16:46:21.285689 32968 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:21.285738 master-0 kubenswrapper[32968]: [-]has-synced failed: reason withheld Mar 09 16:46:21.285738 master-0 kubenswrapper[32968]: [+]process-running ok Mar 09 16:46:21.285738 master-0 kubenswrapper[32968]: healthz check failed Mar 09 16:46:21.286192 master-0 kubenswrapper[32968]: I0309 16:46:21.286150 32968 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:21.391055 master-0 kubenswrapper[32968]: I0309 16:46:21.390988 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:46:21.764731 master-0 kubenswrapper[32968]: I0309 16:46:21.764635 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" Mar 09 16:46:21.780195 master-0 kubenswrapper[32968]: I0309 16:46:21.780132 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-vh6m4" Mar 09 16:46:21.783453 master-0 kubenswrapper[32968]: I0309 16:46:21.782042 32968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 16:46:21.783453 master-0 kubenswrapper[32968]: I0309 16:46:21.782075 32968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 16:46:21.784627 master-0 kubenswrapper[32968]: I0309 16:46:21.784570 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-kdqvv" Mar 09 16:46:21.890665 master-0 kubenswrapper[32968]: I0309 16:46:21.890582 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:46:21.894771 master-0 kubenswrapper[32968]: I0309 16:46:21.894704 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-fqwtv" Mar 09 16:46:22.044373 master-0 kubenswrapper[32968]: I0309 16:46:22.044148 32968 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:22.048783 master-0 kubenswrapper[32968]: I0309 16:46:22.048622 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=14.048602755 podStartE2EDuration="14.048602755s" podCreationTimestamp="2026-03-09 16:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:46:22.048601375 +0000 UTC m=+8.151923915" watchObservedRunningTime="2026-03-09 16:46:22.048602755 +0000 UTC m=+8.151925295" Mar 09 16:46:22.285009 master-0 kubenswrapper[32968]: I0309 16:46:22.284940 32968 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:22.285009 master-0 kubenswrapper[32968]: [-]has-synced failed: reason withheld Mar 09 16:46:22.285009 master-0 kubenswrapper[32968]: [+]process-running ok Mar 09 16:46:22.285009 master-0 kubenswrapper[32968]: healthz check failed Mar 09 16:46:22.285673 master-0 kubenswrapper[32968]: I0309 16:46:22.285025 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:22.387411 master-0 kubenswrapper[32968]: I0309 16:46:22.387217 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:22.701768 master-0 kubenswrapper[32968]: I0309 16:46:22.701672 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:22.707538 master-0 kubenswrapper[32968]: I0309 16:46:22.707477 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:22.794999 master-0 kubenswrapper[32968]: I0309 16:46:22.794937 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:23.103271 master-0 kubenswrapper[32968]: I0309 16:46:23.103081 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:23.286442 master-0 kubenswrapper[32968]: I0309 16:46:23.285747 32968 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:23.286442 master-0 kubenswrapper[32968]: [-]has-synced failed: reason withheld Mar 09 16:46:23.286442 master-0 kubenswrapper[32968]: [+]process-running ok Mar 09 16:46:23.286442 master-0 kubenswrapper[32968]: healthz check failed Mar 09 16:46:23.286442 master-0 kubenswrapper[32968]: I0309 16:46:23.285840 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:23.434187 master-0 kubenswrapper[32968]: I0309 16:46:23.434119 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:46:23.438970 master-0 kubenswrapper[32968]: I0309 16:46:23.438892 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-ncskk" Mar 09 16:46:23.954938 master-0 kubenswrapper[32968]: I0309 16:46:23.954870 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:46:23.984294 master-0 kubenswrapper[32968]: I0309 16:46:23.983593 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:24.273163 master-0 kubenswrapper[32968]: I0309 16:46:24.272929 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:46:24.283520 master-0 kubenswrapper[32968]: I0309 16:46:24.283457 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 16:46:24.285698 master-0 kubenswrapper[32968]: I0309 16:46:24.285595 32968 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:24.285698 master-0 kubenswrapper[32968]: [-]has-synced failed: reason withheld Mar 09 16:46:24.285698 master-0 kubenswrapper[32968]: [+]process-running ok Mar 09 16:46:24.285698 master-0 kubenswrapper[32968]: healthz check failed Mar 09 16:46:24.286091 master-0 kubenswrapper[32968]: I0309 16:46:24.285700 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:24.290044 master-0 kubenswrapper[32968]: I0309 16:46:24.289973 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-dc6bb954d-kxhv7" Mar 09 
16:46:24.768601 master-0 kubenswrapper[32968]: I0309 16:46:24.768551 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49bwx" Mar 09 16:46:24.874313 master-0 kubenswrapper[32968]: I0309 16:46:24.874193 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:24.880311 master-0 kubenswrapper[32968]: I0309 16:46:24.879790 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-67495f79c-bcblv" Mar 09 16:46:24.987484 master-0 kubenswrapper[32968]: I0309 16:46:24.984114 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:24.998463 master-0 kubenswrapper[32968]: I0309 16:46:24.991923 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:25.116507 master-0 kubenswrapper[32968]: I0309 16:46:25.116312 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8gkw8" Mar 09 16:46:25.285208 master-0 kubenswrapper[32968]: I0309 16:46:25.285110 32968 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-rvnwf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 09 16:46:25.285208 master-0 kubenswrapper[32968]: [-]has-synced failed: reason withheld Mar 09 16:46:25.285208 master-0 kubenswrapper[32968]: [+]process-running ok Mar 09 16:46:25.285208 master-0 kubenswrapper[32968]: healthz check failed Mar 09 16:46:25.285734 master-0 kubenswrapper[32968]: I0309 16:46:25.285226 32968 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" podUID="73f1f0ba-f90e-45aa-b1ba-df011a5b9d56" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 09 16:46:25.324297 master-0 kubenswrapper[32968]: I0309 16:46:25.324205 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:25.326909 master-0 kubenswrapper[32968]: I0309 16:46:25.326859 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-tnbvb" Mar 09 16:46:25.453191 master-0 kubenswrapper[32968]: I0309 16:46:25.453008 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrqjw" Mar 09 16:46:25.821276 master-0 kubenswrapper[32968]: I0309 16:46:25.821105 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:46:25.975380 master-0 kubenswrapper[32968]: I0309 16:46:25.975294 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:26.205756 master-0 kubenswrapper[32968]: I0309 16:46:26.205684 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:26.210516 master-0 kubenswrapper[32968]: I0309 16:46:26.210459 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:46:26.289972 master-0 kubenswrapper[32968]: I0309 16:46:26.288915 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 
16:46:26.293252 master-0 kubenswrapper[32968]: I0309 16:46:26.293081 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-rvnwf" Mar 09 16:46:26.313951 master-0 kubenswrapper[32968]: I0309 16:46:26.311747 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcvrg" Mar 09 16:46:26.374786 master-0 kubenswrapper[32968]: I0309 16:46:26.374710 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:46:26.378843 master-0 kubenswrapper[32968]: I0309 16:46:26.378792 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-qtmrd" Mar 09 16:46:26.392408 master-0 kubenswrapper[32968]: I0309 16:46:26.392325 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:46:27.051143 master-0 kubenswrapper[32968]: I0309 16:46:27.051074 32968 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 09 16:46:27.051517 master-0 kubenswrapper[32968]: I0309 16:46:27.051324 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://2d59ac76dc4be81acf3ade62baf431dad3208a3f0083ed9e5b09fbc150f0a9be" gracePeriod=30 Mar 09 16:46:27.052731 master-0 kubenswrapper[32968]: E0309 16:46:27.052142 32968 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-scheduler-pod.yaml\": /etc/kubernetes/manifests/kube-scheduler-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 09 16:46:27.053236 master-0 
kubenswrapper[32968]: I0309 16:46:27.053143 32968 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 09 16:46:27.053637 master-0 kubenswrapper[32968]: E0309 16:46:27.053576 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerName="installer"
Mar 09 16:46:27.053637 master-0 kubenswrapper[32968]: I0309 16:46:27.053618 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerName="installer"
Mar 09 16:46:27.053637 master-0 kubenswrapper[32968]: E0309 16:46:27.053640 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053653 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053664 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053673 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053694 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4320d00b-9add-4224-9632-d8422fec5b0b" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053701 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="4320d00b-9add-4224-9632-d8422fec5b0b" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053715 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053722 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053731 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053738 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053749 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696fcca2-df1a-491d-956d-1cfda1ee5e70" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053755 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="696fcca2-df1a-491d-956d-1cfda1ee5e70" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053765 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053771 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053783 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053791 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053801 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053810 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: E0309 16:46:27.053832 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerName="installer"
Mar 09 16:46:27.053814 master-0 kubenswrapper[32968]: I0309 16:46:27.053842 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: E0309 16:46:27.053858 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8139a33-a597-4038-9bb4-183e72f498b4" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.053868 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8139a33-a597-4038-9bb4-183e72f498b4" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054004 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="797303d2-6d31-42f6-a1a4-c894509fba00" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054047 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054059 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8139a33-a597-4038-9bb4-183e72f498b4" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054073 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8a48b1-d4a9-48fb-912e-2f793a6d8478" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054084 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d95c7ed-e3ea-4383-b083-1df5df078f1c" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054095 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="07aaf428-5040-4e75-9c0d-e092d0b2c2f3" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054107 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="963633a2-3f9d-4b82-9e53-d749fa52bf8e" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054120 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="737facff-692c-4d57-a52b-e5f19b74ffd7" containerName="assisted-installer-controller"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054129 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f44499-c673-4c73-8ee9-dcef8914ce14" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054145 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="696fcca2-df1a-491d-956d-1cfda1ee5e70" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054154 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4320d00b-9add-4224-9632-d8422fec5b0b" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054164 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5298b1-ccde-4c18-8cdb-f415a4842f75" containerName="installer"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: E0309 16:46:27.054313 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054324 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 09 16:46:27.054581 master-0 kubenswrapper[32968]: I0309 16:46:27.054433 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler"
Mar 09 16:46:27.055431 master-0 kubenswrapper[32968]: I0309 16:46:27.055315 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.089341 master-0 kubenswrapper[32968]: I0309 16:46:27.089268 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 09 16:46:27.238475 master-0 kubenswrapper[32968]: I0309 16:46:27.238392 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:46:27.243522 master-0 kubenswrapper[32968]: I0309 16:46:27.242714 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.243522 master-0 kubenswrapper[32968]: I0309 16:46:27.242792 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.282852 master-0 kubenswrapper[32968]: I0309 16:46:27.282199 32968 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="ec479369-be6e-4c24-a8e2-b59a9b29a66b"
Mar 09 16:46:27.330339 master-0 kubenswrapper[32968]: I0309 16:46:27.330150 32968 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 09 16:46:27.330660 master-0 kubenswrapper[32968]: I0309 16:46:27.330521 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor" containerID="cri-o://3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5" gracePeriod=5
Mar 09 16:46:27.344495 master-0 kubenswrapper[32968]: I0309 16:46:27.344386 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 09 16:46:27.344820 master-0 kubenswrapper[32968]: I0309 16:46:27.344556 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") "
Mar 09 16:46:27.344911 master-0 kubenswrapper[32968]: I0309 16:46:27.344781 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:27.345046 master-0 kubenswrapper[32968]: I0309 16:46:27.344865 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:27.345523 master-0 kubenswrapper[32968]: I0309 16:46:27.345473 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.345674 master-0 kubenswrapper[32968]: I0309 16:46:27.345642 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.345808 master-0 kubenswrapper[32968]: I0309 16:46:27.345782 32968 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:27.345808 master-0 kubenswrapper[32968]: I0309 16:46:27.345805 32968 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:27.345892 master-0 kubenswrapper[32968]: I0309 16:46:27.345847 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.345892 master-0 kubenswrapper[32968]: I0309 16:46:27.345874 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.390633 master-0 kubenswrapper[32968]: I0309 16:46:27.390551 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:27.413228 master-0 kubenswrapper[32968]: W0309 16:46:27.412962 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6a75ab47c06be4e74d05f552da4470.slice/crio-3c1dce5672c5f05f0dff2b35056b08da3f448edc172e1eb15f3b60a90a7d2dc6 WatchSource:0}: Error finding container 3c1dce5672c5f05f0dff2b35056b08da3f448edc172e1eb15f3b60a90a7d2dc6: Status 404 returned error can't find the container with id 3c1dce5672c5f05f0dff2b35056b08da3f448edc172e1eb15f3b60a90a7d2dc6
Mar 09 16:46:27.555120 master-0 kubenswrapper[32968]: I0309 16:46:27.555046 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:46:27.558743 master-0 kubenswrapper[32968]: I0309 16:46:27.558714 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:46:27.832960 master-0 kubenswrapper[32968]: I0309 16:46:27.832880 32968 generic.go:334] "Generic (PLEG): container finished" podID="99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" containerID="4bd1e152391019fc30761bea1a52c716092ae04ae17eaec109956953b77c5f4d" exitCode=0
Mar 09 16:46:27.833338 master-0 kubenswrapper[32968]: I0309 16:46:27.832971 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e","Type":"ContainerDied","Data":"4bd1e152391019fc30761bea1a52c716092ae04ae17eaec109956953b77c5f4d"}
Mar 09 16:46:27.835171 master-0 kubenswrapper[32968]: I0309 16:46:27.835136 32968 generic.go:334] "Generic (PLEG): container finished" podID="aa6a75ab47c06be4e74d05f552da4470" containerID="417059c311595591855834701cee62b90f83ad27c284abb9cd51e1d0cc67771b" exitCode=0
Mar 09 16:46:27.835233 master-0 kubenswrapper[32968]: I0309 16:46:27.835201 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerDied","Data":"417059c311595591855834701cee62b90f83ad27c284abb9cd51e1d0cc67771b"}
Mar 09 16:46:27.835286 master-0 kubenswrapper[32968]: I0309 16:46:27.835241 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"3c1dce5672c5f05f0dff2b35056b08da3f448edc172e1eb15f3b60a90a7d2dc6"}
Mar 09 16:46:27.843482 master-0 kubenswrapper[32968]: I0309 16:46:27.842805 32968 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="2d59ac76dc4be81acf3ade62baf431dad3208a3f0083ed9e5b09fbc150f0a9be" exitCode=0
Mar 09 16:46:27.843482 master-0 kubenswrapper[32968]: I0309 16:46:27.842898 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80c9b37431554cd92a24ad34b7af18e7291912146f9f37daf8b74df6a91ee6d"
Mar 09 16:46:27.843482 master-0 kubenswrapper[32968]: I0309 16:46:27.842967 32968 scope.go:117] "RemoveContainer" containerID="e2631a32e255a52568b9ac43894518418d92bac3336a41a26e162021d7380239"
Mar 09 16:46:27.843482 master-0 kubenswrapper[32968]: I0309 16:46:27.842954 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 09 16:46:28.008316 master-0 kubenswrapper[32968]: I0309 16:46:28.007374 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"
Mar 09 16:46:28.023208 master-0 kubenswrapper[32968]: I0309 16:46:28.023141 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-775b84c99f-6ffjr"
Mar 09 16:46:28.125192 master-0 kubenswrapper[32968]: I0309 16:46:28.124996 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes"
Mar 09 16:46:28.125525 master-0 kubenswrapper[32968]: I0309 16:46:28.125477 32968 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 09 16:46:28.217614 master-0 kubenswrapper[32968]: I0309 16:46:28.217542 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 09 16:46:28.217614 master-0 kubenswrapper[32968]: I0309 16:46:28.217592 32968 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="ec479369-be6e-4c24-a8e2-b59a9b29a66b"
Mar 09 16:46:28.219967 master-0 kubenswrapper[32968]: I0309 16:46:28.219503 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 09 16:46:28.219967 master-0 kubenswrapper[32968]: I0309 16:46:28.219581 32968 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="ec479369-be6e-4c24-a8e2-b59a9b29a66b"
Mar 09 16:46:28.861466 master-0 kubenswrapper[32968]: I0309 16:46:28.857621 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"139a6864d51585ca9fc7754ba0f32a444f00c037e76604a2e9f2400165a2a7c1"}
Mar 09 16:46:28.861466 master-0 kubenswrapper[32968]: I0309 16:46:28.857688 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"e88ee60e9e22f1236b0fd098b2d97202bb90bfbca60e2a705750ba73764e0c28"}
Mar 09 16:46:29.409012 master-0 kubenswrapper[32968]: I0309 16:46:29.408818 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 09 16:46:29.593277 master-0 kubenswrapper[32968]: I0309 16:46:29.593203 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") pod \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") "
Mar 09 16:46:29.593656 master-0 kubenswrapper[32968]: I0309 16:46:29.593488 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") pod \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") "
Mar 09 16:46:29.593656 master-0 kubenswrapper[32968]: I0309 16:46:29.593614 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") pod \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\" (UID: \"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e\") "
Mar 09 16:46:29.593925 master-0 kubenswrapper[32968]: I0309 16:46:29.593870 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" (UID: "99c45f9c-e4ce-48c5-b137-e5b6f6464a1e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:29.594233 master-0 kubenswrapper[32968]: I0309 16:46:29.594168 32968 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:29.594362 master-0 kubenswrapper[32968]: I0309 16:46:29.594342 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock" (OuterVolumeSpecName: "var-lock") pod "99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" (UID: "99c45f9c-e4ce-48c5-b137-e5b6f6464a1e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:29.610231 master-0 kubenswrapper[32968]: I0309 16:46:29.609232 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" (UID: "99c45f9c-e4ce-48c5-b137-e5b6f6464a1e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:46:29.695966 master-0 kubenswrapper[32968]: I0309 16:46:29.695923 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:29.696177 master-0 kubenswrapper[32968]: I0309 16:46:29.696164 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99c45f9c-e4ce-48c5-b137-e5b6f6464a1e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:29.741082 master-0 kubenswrapper[32968]: I0309 16:46:29.741027 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:29.741696 master-0 kubenswrapper[32968]: I0309 16:46:29.741677 32968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 09 16:46:29.741787 master-0 kubenswrapper[32968]: I0309 16:46:29.741776 32968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 09 16:46:29.833051 master-0 kubenswrapper[32968]: I0309 16:46:29.832988 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh"
Mar 09 16:46:29.904677 master-0 kubenswrapper[32968]: I0309 16:46:29.904381 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 09 16:46:29.904677 master-0 kubenswrapper[32968]: I0309 16:46:29.904520 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"99c45f9c-e4ce-48c5-b137-e5b6f6464a1e","Type":"ContainerDied","Data":"ca6c68bebab4667be94ed8d4950c8443a1dc101549e30dea2fc49d8db92f1da8"}
Mar 09 16:46:29.904677 master-0 kubenswrapper[32968]: I0309 16:46:29.904583 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca6c68bebab4667be94ed8d4950c8443a1dc101549e30dea2fc49d8db92f1da8"
Mar 09 16:46:29.916001 master-0 kubenswrapper[32968]: I0309 16:46:29.915539 32968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 09 16:46:29.916001 master-0 kubenswrapper[32968]: I0309 16:46:29.915551 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"1b6350c02266b731ad4a0409e6bff5c8dce1bf5e015401a442df6779a1727b4f"}
Mar 09 16:46:29.916951 master-0 kubenswrapper[32968]: I0309 16:46:29.916432 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:46:29.982795 master-0 kubenswrapper[32968]: I0309 16:46:29.982690 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.9826717069999997 podStartE2EDuration="2.982671707s" podCreationTimestamp="2026-03-09 16:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:46:29.979142983 +0000 UTC m=+16.082465543" watchObservedRunningTime="2026-03-09 16:46:29.982671707 +0000 UTC m=+16.085994247"
Mar 09 16:46:31.022250 master-0 kubenswrapper[32968]: I0309 16:46:31.022207 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:46:31.072344 master-0 kubenswrapper[32968]: I0309 16:46:31.071603 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcvrg"
Mar 09 16:46:32.926326 master-0 kubenswrapper[32968]: I0309 16:46:32.926262 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 09 16:46:32.926884 master-0 kubenswrapper[32968]: I0309 16:46:32.926354 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:46:32.946762 master-0 kubenswrapper[32968]: I0309 16:46:32.946670 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_3a18cac8a90d6913a6a0391d805cddc9/startup-monitor/0.log"
Mar 09 16:46:32.946762 master-0 kubenswrapper[32968]: I0309 16:46:32.946744 32968 generic.go:334] "Generic (PLEG): container finished" podID="3a18cac8a90d6913a6a0391d805cddc9" containerID="3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5" exitCode=137
Mar 09 16:46:32.947060 master-0 kubenswrapper[32968]: I0309 16:46:32.946802 32968 scope.go:117] "RemoveContainer" containerID="3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5"
Mar 09 16:46:32.947060 master-0 kubenswrapper[32968]: I0309 16:46:32.946921 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:46:32.970698 master-0 kubenswrapper[32968]: I0309 16:46:32.968714 32968 scope.go:117] "RemoveContainer" containerID="3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5"
Mar 09 16:46:32.970698 master-0 kubenswrapper[32968]: E0309 16:46:32.969569 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5\": container with ID starting with 3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5 not found: ID does not exist" containerID="3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5"
Mar 09 16:46:32.970698 master-0 kubenswrapper[32968]: I0309 16:46:32.969647 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5"} err="failed to get container status \"3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5\": rpc error: code = NotFound desc = could not find container \"3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5\": container with ID starting with 3e32d7d90a443b35fb608d21799778b1dce882be4ea88ed4328757284e2543e5 not found: ID does not exist"
Mar 09 16:46:33.047787 master-0 kubenswrapper[32968]: I0309 16:46:33.047703 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 09 16:46:33.047787 master-0 kubenswrapper[32968]: I0309 16:46:33.047805 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 09 16:46:33.048169 master-0 kubenswrapper[32968]: I0309 16:46:33.047877 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 09 16:46:33.048169 master-0 kubenswrapper[32968]: I0309 16:46:33.047907 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 09 16:46:33.048169 master-0 kubenswrapper[32968]: I0309 16:46:33.047953 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") pod \"3a18cac8a90d6913a6a0391d805cddc9\" (UID: \"3a18cac8a90d6913a6a0391d805cddc9\") "
Mar 09 16:46:33.048169 master-0 kubenswrapper[32968]: I0309 16:46:33.048124 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:33.048309 master-0 kubenswrapper[32968]: I0309 16:46:33.048203 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests" (OuterVolumeSpecName: "manifests") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:33.048309 master-0 kubenswrapper[32968]: I0309 16:46:33.048219 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:33.048309 master-0 kubenswrapper[32968]: I0309 16:46:33.048235 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log" (OuterVolumeSpecName: "var-log") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:33.048556 master-0 kubenswrapper[32968]: I0309 16:46:33.048509 32968 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-manifests\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:33.048556 master-0 kubenswrapper[32968]: I0309 16:46:33.048548 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:33.048645 master-0 kubenswrapper[32968]: I0309 16:46:33.048558 32968 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:33.048645 master-0 kubenswrapper[32968]: I0309 16:46:33.048571 32968 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-var-log\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:33.055233 master-0 kubenswrapper[32968]: I0309 16:46:33.055117 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "3a18cac8a90d6913a6a0391d805cddc9" (UID: "3a18cac8a90d6913a6a0391d805cddc9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:46:33.149334 master-0 kubenswrapper[32968]: I0309 16:46:33.149203 32968 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a18cac8a90d6913a6a0391d805cddc9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:46:33.166102 master-0 kubenswrapper[32968]: I0309 16:46:33.166040 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8gkw8"
Mar 09 16:46:33.211950 master-0 kubenswrapper[32968]: I0309 16:46:33.211885 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8gkw8"
Mar 09 16:46:33.305482 master-0 kubenswrapper[32968]: I0309 16:46:33.305227 32968 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="4cbc9457-d0b7-4305-94a2-3f2dc916d193"
Mar 09 16:46:33.997314 master-0 kubenswrapper[32968]: I0309 16:46:33.997236 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:46:34.053995 master-0 kubenswrapper[32968]: I0309 16:46:34.053926 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49bwx"
Mar 09 16:46:34.093898 master-0 kubenswrapper[32968]: I0309 16:46:34.093836 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a18cac8a90d6913a6a0391d805cddc9" path="/var/lib/kubelet/pods/3a18cac8a90d6913a6a0391d805cddc9/volumes"
Mar 09 16:46:34.094564 master-0 kubenswrapper[32968]: I0309 16:46:34.094544 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 09 16:46:34.140902 master-0 kubenswrapper[32968]: I0309 16:46:34.140802 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 09 16:46:34.140902 master-0 kubenswrapper[32968]: I0309 16:46:34.140867 32968 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="4cbc9457-d0b7-4305-94a2-3f2dc916d193"
Mar 09 16:46:34.143051 master-0 kubenswrapper[32968]: I0309 16:46:34.142987 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 09 16:46:34.143051 master-0 kubenswrapper[32968]: I0309 16:46:34.143042 32968 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="4cbc9457-d0b7-4305-94a2-3f2dc916d193"
Mar 09 16:46:34.334588 master-0 kubenswrapper[32968]: I0309 16:46:34.334396 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrqjw"
Mar 09 16:46:34.413595 master-0 kubenswrapper[32968]: I0309 16:46:34.413534 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrqjw"
Mar 09 16:46:34.428354 master-0 kubenswrapper[32968]: I0309 16:46:34.428278 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-6rmfv"]
Mar 09 16:46:34.428719 master-0 kubenswrapper[32968]: E0309 16:46:34.428700 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" containerName="installer"
Mar 09 16:46:34.428719 master-0 kubenswrapper[32968]: I0309 16:46:34.428718 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" containerName="installer"
Mar 09 16:46:34.428818 master-0 kubenswrapper[32968]: E0309 16:46:34.428758 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor"
Mar 09 16:46:34.428818 master-0 kubenswrapper[32968]: I0309 16:46:34.428768 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor"
Mar 09 16:46:34.429048 master-0 kubenswrapper[32968]: I0309 16:46:34.429022 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c45f9c-e4ce-48c5-b137-e5b6f6464a1e" containerName="installer"
Mar 09 16:46:34.429048 master-0 kubenswrapper[32968]: I0309 16:46:34.429045 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a18cac8a90d6913a6a0391d805cddc9" containerName="startup-monitor"
Mar 09 16:46:34.429728 master-0 kubenswrapper[32968]: I0309 16:46:34.429695 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.449705 master-0 kubenswrapper[32968]: I0309 16:46:34.449638 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-jtvw2" Mar 09 16:46:34.453627 master-0 kubenswrapper[32968]: I0309 16:46:34.453579 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 09 16:46:34.453918 master-0 kubenswrapper[32968]: I0309 16:46:34.453896 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 09 16:46:34.462464 master-0 kubenswrapper[32968]: I0309 16:46:34.461084 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 09 16:46:34.462464 master-0 kubenswrapper[32968]: I0309 16:46:34.461389 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 09 16:46:34.482558 master-0 kubenswrapper[32968]: I0309 16:46:34.480236 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 09 16:46:34.497467 master-0 kubenswrapper[32968]: I0309 16:46:34.494710 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-6rmfv"] Mar 09 16:46:34.584209 master-0 kubenswrapper[32968]: I0309 16:46:34.583897 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-trusted-ca\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.584209 master-0 kubenswrapper[32968]: I0309 16:46:34.583978 32968 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-serving-cert\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.584209 master-0 kubenswrapper[32968]: I0309 16:46:34.584030 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-config\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.584209 master-0 kubenswrapper[32968]: I0309 16:46:34.584091 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wlp2\" (UniqueName: \"kubernetes.io/projected/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-kube-api-access-4wlp2\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.686735 master-0 kubenswrapper[32968]: I0309 16:46:34.685119 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-trusted-ca\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.686735 master-0 kubenswrapper[32968]: I0309 16:46:34.685198 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-serving-cert\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: 
\"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.686735 master-0 kubenswrapper[32968]: I0309 16:46:34.685236 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-config\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.686735 master-0 kubenswrapper[32968]: I0309 16:46:34.685266 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wlp2\" (UniqueName: \"kubernetes.io/projected/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-kube-api-access-4wlp2\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.687416 master-0 kubenswrapper[32968]: I0309 16:46:34.686744 32968 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 09 16:46:34.688515 master-0 kubenswrapper[32968]: I0309 16:46:34.688484 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-config\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.688752 master-0 kubenswrapper[32968]: I0309 16:46:34.688710 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-trusted-ca\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.689936 master-0 kubenswrapper[32968]: I0309 16:46:34.689912 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-serving-cert\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.703391 master-0 kubenswrapper[32968]: I0309 16:46:34.703326 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wlp2\" (UniqueName: \"kubernetes.io/projected/9f8c4375-1f1d-42f5-bcdb-99e1c6699faf-kube-api-access-4wlp2\") pod \"console-operator-6c7fb6b958-6rmfv\" (UID: \"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf\") " pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:34.744246 master-0 kubenswrapper[32968]: I0309 16:46:34.744128 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:35.277338 master-0 kubenswrapper[32968]: I0309 16:46:35.277169 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-6rmfv"] Mar 09 16:46:35.285625 master-0 kubenswrapper[32968]: W0309 16:46:35.285557 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f8c4375_1f1d_42f5_bcdb_99e1c6699faf.slice/crio-6e60d0329884392d9259082a09d10eec955f2b9c23df996f579a94376d615568 WatchSource:0}: Error finding container 6e60d0329884392d9259082a09d10eec955f2b9c23df996f579a94376d615568: Status 404 returned error can't find the container with id 6e60d0329884392d9259082a09d10eec955f2b9c23df996f579a94376d615568 Mar 09 16:46:35.288760 master-0 kubenswrapper[32968]: I0309 16:46:35.288714 32968 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 16:46:36.004872 master-0 kubenswrapper[32968]: I0309 16:46:36.004798 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" event={"ID":"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf","Type":"ContainerStarted","Data":"6e60d0329884392d9259082a09d10eec955f2b9c23df996f579a94376d615568"} Mar 09 16:46:38.648347 master-0 kubenswrapper[32968]: I0309 16:46:38.648185 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:39.037258 master-0 kubenswrapper[32968]: I0309 16:46:39.037202 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" event={"ID":"9f8c4375-1f1d-42f5-bcdb-99e1c6699faf","Type":"ContainerStarted","Data":"b7e35c7304c1ca6f65f1ffc3010368e27b9866709c3ebfba65a4e07a2e93628e"} Mar 09 16:46:39.037572 master-0 kubenswrapper[32968]: I0309 
16:46:39.037460 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:39.048307 master-0 kubenswrapper[32968]: I0309 16:46:39.048246 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" Mar 09 16:46:39.074760 master-0 kubenswrapper[32968]: I0309 16:46:39.074664 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-6rmfv" podStartSLOduration=1.9676752419999999 podStartE2EDuration="5.074637087s" podCreationTimestamp="2026-03-09 16:46:34 +0000 UTC" firstStartedPulling="2026-03-09 16:46:35.288555634 +0000 UTC m=+21.391878174" lastFinishedPulling="2026-03-09 16:46:38.395517479 +0000 UTC m=+24.498840019" observedRunningTime="2026-03-09 16:46:39.069193382 +0000 UTC m=+25.172515922" watchObservedRunningTime="2026-03-09 16:46:39.074637087 +0000 UTC m=+25.177959627" Mar 09 16:46:39.089784 master-0 kubenswrapper[32968]: I0309 16:46:39.089724 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-gh9dp"] Mar 09 16:46:39.090645 master-0 kubenswrapper[32968]: I0309 16:46:39.090619 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-84f57b9877-gh9dp" Mar 09 16:46:39.094509 master-0 kubenswrapper[32968]: I0309 16:46:39.094454 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 09 16:46:39.094705 master-0 kubenswrapper[32968]: I0309 16:46:39.094679 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-ts66c" Mar 09 16:46:39.094854 master-0 kubenswrapper[32968]: I0309 16:46:39.094832 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 09 16:46:39.117830 master-0 kubenswrapper[32968]: I0309 16:46:39.117766 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-gh9dp"] Mar 09 16:46:39.184397 master-0 kubenswrapper[32968]: I0309 16:46:39.184335 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6b9595f755-mznkc"] Mar 09 16:46:39.185381 master-0 kubenswrapper[32968]: I0309 16:46:39.185345 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:39.188087 master-0 kubenswrapper[32968]: I0309 16:46:39.187918 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-dzv2s" Mar 09 16:46:39.188310 master-0 kubenswrapper[32968]: I0309 16:46:39.188283 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 09 16:46:39.200133 master-0 kubenswrapper[32968]: I0309 16:46:39.200057 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6b9595f755-mznkc"] Mar 09 16:46:39.274470 master-0 kubenswrapper[32968]: I0309 16:46:39.273703 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1292fc7e-215b-4531-bd94-109b7e299733-monitoring-plugin-cert\") pod \"monitoring-plugin-6b9595f755-mznkc\" (UID: \"1292fc7e-215b-4531-bd94-109b7e299733\") " pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:39.274470 master-0 kubenswrapper[32968]: I0309 16:46:39.273788 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt2t\" (UniqueName: \"kubernetes.io/projected/4ec214b8-5e2d-48a6-bed4-7859b5c423e1-kube-api-access-9lt2t\") pod \"downloads-84f57b9877-gh9dp\" (UID: \"4ec214b8-5e2d-48a6-bed4-7859b5c423e1\") " pod="openshift-console/downloads-84f57b9877-gh9dp" Mar 09 16:46:39.375697 master-0 kubenswrapper[32968]: I0309 16:46:39.375505 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1292fc7e-215b-4531-bd94-109b7e299733-monitoring-plugin-cert\") pod \"monitoring-plugin-6b9595f755-mznkc\" (UID: \"1292fc7e-215b-4531-bd94-109b7e299733\") " 
pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:39.375697 master-0 kubenswrapper[32968]: I0309 16:46:39.375685 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt2t\" (UniqueName: \"kubernetes.io/projected/4ec214b8-5e2d-48a6-bed4-7859b5c423e1-kube-api-access-9lt2t\") pod \"downloads-84f57b9877-gh9dp\" (UID: \"4ec214b8-5e2d-48a6-bed4-7859b5c423e1\") " pod="openshift-console/downloads-84f57b9877-gh9dp" Mar 09 16:46:39.379554 master-0 kubenswrapper[32968]: I0309 16:46:39.379488 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/1292fc7e-215b-4531-bd94-109b7e299733-monitoring-plugin-cert\") pod \"monitoring-plugin-6b9595f755-mznkc\" (UID: \"1292fc7e-215b-4531-bd94-109b7e299733\") " pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:39.394677 master-0 kubenswrapper[32968]: I0309 16:46:39.394612 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt2t\" (UniqueName: \"kubernetes.io/projected/4ec214b8-5e2d-48a6-bed4-7859b5c423e1-kube-api-access-9lt2t\") pod \"downloads-84f57b9877-gh9dp\" (UID: \"4ec214b8-5e2d-48a6-bed4-7859b5c423e1\") " pod="openshift-console/downloads-84f57b9877-gh9dp" Mar 09 16:46:39.410408 master-0 kubenswrapper[32968]: I0309 16:46:39.410337 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-gh9dp" Mar 09 16:46:39.520279 master-0 kubenswrapper[32968]: I0309 16:46:39.520206 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:39.860251 master-0 kubenswrapper[32968]: I0309 16:46:39.859222 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-gh9dp"] Mar 09 16:46:39.970201 master-0 kubenswrapper[32968]: I0309 16:46:39.970123 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6b9595f755-mznkc"] Mar 09 16:46:39.979540 master-0 kubenswrapper[32968]: W0309 16:46:39.979462 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1292fc7e_215b_4531_bd94_109b7e299733.slice/crio-a14d97ee62bf04ee8ceb48c8dc896e0ab59b904e7ae55bccef21dcdb0458e954 WatchSource:0}: Error finding container a14d97ee62bf04ee8ceb48c8dc896e0ab59b904e7ae55bccef21dcdb0458e954: Status 404 returned error can't find the container with id a14d97ee62bf04ee8ceb48c8dc896e0ab59b904e7ae55bccef21dcdb0458e954 Mar 09 16:46:40.044604 master-0 kubenswrapper[32968]: I0309 16:46:40.044526 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-gh9dp" event={"ID":"4ec214b8-5e2d-48a6-bed4-7859b5c423e1","Type":"ContainerStarted","Data":"596b592bd27a9bfa4a630182bc09c24c808fc981e09803c1fa07e0bbcee37e0d"} Mar 09 16:46:40.045807 master-0 kubenswrapper[32968]: I0309 16:46:40.045741 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" event={"ID":"1292fc7e-215b-4531-bd94-109b7e299733","Type":"ContainerStarted","Data":"a14d97ee62bf04ee8ceb48c8dc896e0ab59b904e7ae55bccef21dcdb0458e954"} Mar 09 16:46:42.061386 master-0 kubenswrapper[32968]: I0309 16:46:42.061232 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" 
event={"ID":"1292fc7e-215b-4531-bd94-109b7e299733","Type":"ContainerStarted","Data":"b80fad0045c54686c797142839567ee872e9be134008e276e649ab889138eeca"} Mar 09 16:46:42.062097 master-0 kubenswrapper[32968]: I0309 16:46:42.061531 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:42.069097 master-0 kubenswrapper[32968]: I0309 16:46:42.069056 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" Mar 09 16:46:42.109336 master-0 kubenswrapper[32968]: I0309 16:46:42.109238 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6b9595f755-mznkc" podStartSLOduration=1.387694901 podStartE2EDuration="3.109201557s" podCreationTimestamp="2026-03-09 16:46:39 +0000 UTC" firstStartedPulling="2026-03-09 16:46:39.981854131 +0000 UTC m=+26.085176661" lastFinishedPulling="2026-03-09 16:46:41.703360777 +0000 UTC m=+27.806683317" observedRunningTime="2026-03-09 16:46:42.085360064 +0000 UTC m=+28.188682604" watchObservedRunningTime="2026-03-09 16:46:42.109201557 +0000 UTC m=+28.212524097" Mar 09 16:46:45.984305 master-0 kubenswrapper[32968]: I0309 16:46:45.984228 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:45.991319 master-0 kubenswrapper[32968]: I0309 16:46:45.990401 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:46:47.340288 master-0 kubenswrapper[32968]: I0309 16:46:47.340214 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:47.341116 master-0 kubenswrapper[32968]: I0309 16:46:47.340443 32968 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 09 
16:46:47.376413 master-0 kubenswrapper[32968]: I0309 16:46:47.375755 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vwgwh" Mar 09 16:46:49.169174 master-0 kubenswrapper[32968]: I0309 16:46:49.169067 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-z94mr"] Mar 09 16:46:49.170720 master-0 kubenswrapper[32968]: I0309 16:46:49.170520 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.182701 master-0 kubenswrapper[32968]: I0309 16:46:49.178056 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 09 16:46:49.182701 master-0 kubenswrapper[32968]: I0309 16:46:49.178380 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-vmh5g" Mar 09 16:46:49.268330 master-0 kubenswrapper[32968]: I0309 16:46:49.268257 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c38c26-c034-4ebb-976e-4b6a2e287275-host\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.268330 master-0 kubenswrapper[32968]: I0309 16:46:49.268316 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9c38c26-c034-4ebb-976e-4b6a2e287275-serviceca\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.268738 master-0 kubenswrapper[32968]: I0309 16:46:49.268360 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzb2x\" (UniqueName: 
\"kubernetes.io/projected/f9c38c26-c034-4ebb-976e-4b6a2e287275-kube-api-access-hzb2x\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.370340 master-0 kubenswrapper[32968]: I0309 16:46:49.370250 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c38c26-c034-4ebb-976e-4b6a2e287275-host\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.370340 master-0 kubenswrapper[32968]: I0309 16:46:49.370338 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9c38c26-c034-4ebb-976e-4b6a2e287275-serviceca\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.370779 master-0 kubenswrapper[32968]: I0309 16:46:49.370386 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzb2x\" (UniqueName: \"kubernetes.io/projected/f9c38c26-c034-4ebb-976e-4b6a2e287275-kube-api-access-hzb2x\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.370779 master-0 kubenswrapper[32968]: I0309 16:46:49.370653 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c38c26-c034-4ebb-976e-4b6a2e287275-host\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.371338 master-0 kubenswrapper[32968]: I0309 16:46:49.371309 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9c38c26-c034-4ebb-976e-4b6a2e287275-serviceca\") pod \"node-ca-z94mr\" (UID: 
\"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.390366 master-0 kubenswrapper[32968]: I0309 16:46:49.390281 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzb2x\" (UniqueName: \"kubernetes.io/projected/f9c38c26-c034-4ebb-976e-4b6a2e287275-kube-api-access-hzb2x\") pod \"node-ca-z94mr\" (UID: \"f9c38c26-c034-4ebb-976e-4b6a2e287275\") " pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:49.511258 master-0 kubenswrapper[32968]: I0309 16:46:49.511175 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-z94mr" Mar 09 16:46:50.134400 master-0 kubenswrapper[32968]: I0309 16:46:50.134334 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z94mr" event={"ID":"f9c38c26-c034-4ebb-976e-4b6a2e287275","Type":"ContainerStarted","Data":"342dd3a564ff1681671381272fe688112c3a53df34a7d150385145f283e87c62"} Mar 09 16:46:53.112926 master-0 kubenswrapper[32968]: I0309 16:46:53.112512 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs"] Mar 09 16:46:53.113719 master-0 kubenswrapper[32968]: I0309 16:46:53.113646 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.124083 master-0 kubenswrapper[32968]: I0309 16:46:53.124023 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 09 16:46:53.124615 master-0 kubenswrapper[32968]: I0309 16:46:53.124587 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 09 16:46:53.128710 master-0 kubenswrapper[32968]: I0309 16:46:53.128599 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs"] Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.136743 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.137268 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.138471 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.138953 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.139177 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.139370 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.139495 32968 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.139573 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 09 16:46:53.139963 master-0 kubenswrapper[32968]: I0309 16:46:53.139696 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 09 16:46:53.160087 master-0 kubenswrapper[32968]: I0309 16:46:53.159984 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 09 16:46:53.161308 master-0 kubenswrapper[32968]: I0309 16:46:53.160909 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-dzbl7" Mar 09 16:46:53.181238 master-0 kubenswrapper[32968]: I0309 16:46:53.175935 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 09 16:46:53.193477 master-0 kubenswrapper[32968]: I0309 16:46:53.193390 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-z94mr" event={"ID":"f9c38c26-c034-4ebb-976e-4b6a2e287275","Type":"ContainerStarted","Data":"d209c544cc7161a345d41565387c900a0f259a96e5d3d02630df585b99d23bc9"} Mar 09 16:46:53.213927 master-0 kubenswrapper[32968]: I0309 16:46:53.213824 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-z94mr" podStartSLOduration=1.411636434 podStartE2EDuration="4.213803835s" podCreationTimestamp="2026-03-09 16:46:49 +0000 UTC" firstStartedPulling="2026-03-09 16:46:49.559858655 +0000 UTC m=+35.663181195" lastFinishedPulling="2026-03-09 16:46:52.362026056 +0000 UTC m=+38.465348596" observedRunningTime="2026-03-09 16:46:53.2128465 
+0000 UTC m=+39.316169060" watchObservedRunningTime="2026-03-09 16:46:53.213803835 +0000 UTC m=+39.317126375" Mar 09 16:46:53.283978 master-0 kubenswrapper[32968]: I0309 16:46:53.283873 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2906d129-32ed-4de4-a463-8f62c576f742-audit-dir\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.284872 master-0 kubenswrapper[32968]: I0309 16:46:53.284798 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285192 master-0 kubenswrapper[32968]: I0309 16:46:53.285158 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-audit-policies\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285309 master-0 kubenswrapper[32968]: I0309 16:46:53.285289 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285576 master-0 kubenswrapper[32968]: I0309 
16:46:53.285512 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-service-ca\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285655 master-0 kubenswrapper[32968]: I0309 16:46:53.285606 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-error\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285655 master-0 kubenswrapper[32968]: I0309 16:46:53.285635 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28g6j\" (UniqueName: \"kubernetes.io/projected/2906d129-32ed-4de4-a463-8f62c576f742-kube-api-access-28g6j\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285754 master-0 kubenswrapper[32968]: I0309 16:46:53.285699 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.285866 master-0 kubenswrapper[32968]: I0309 16:46:53.285801 32968 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-login\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.286093 master-0 kubenswrapper[32968]: I0309 16:46:53.285983 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.286164 master-0 kubenswrapper[32968]: I0309 16:46:53.286102 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.286164 master-0 kubenswrapper[32968]: I0309 16:46:53.286153 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-router-certs\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.286274 master-0 kubenswrapper[32968]: I0309 16:46:53.286176 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.386934 master-0 kubenswrapper[32968]: I0309 16:46:53.386822 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.387178 master-0 kubenswrapper[32968]: I0309 16:46:53.387162 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-login\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.387397 master-0 kubenswrapper[32968]: I0309 16:46:53.387377 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.388487 master-0 kubenswrapper[32968]: I0309 16:46:53.388472 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.388973 master-0 kubenswrapper[32968]: I0309 16:46:53.388957 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-router-certs\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389079 master-0 kubenswrapper[32968]: I0309 16:46:53.389065 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389214 master-0 kubenswrapper[32968]: I0309 16:46:53.389197 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2906d129-32ed-4de4-a463-8f62c576f742-audit-dir\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389356 master-0 kubenswrapper[32968]: I0309 16:46:53.389344 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " 
pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389460 master-0 kubenswrapper[32968]: I0309 16:46:53.389446 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-audit-policies\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389547 master-0 kubenswrapper[32968]: I0309 16:46:53.389533 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389622 master-0 kubenswrapper[32968]: I0309 16:46:53.389609 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-service-ca\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389696 master-0 kubenswrapper[32968]: I0309 16:46:53.389683 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-error\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389830 master-0 kubenswrapper[32968]: I0309 16:46:53.389535 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2906d129-32ed-4de4-a463-8f62c576f742-audit-dir\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.389911 master-0 kubenswrapper[32968]: E0309 16:46:53.389859 32968 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:53.390020 master-0 kubenswrapper[32968]: I0309 16:46:53.388413 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.390020 master-0 kubenswrapper[32968]: E0309 16:46:53.389864 32968 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 09 16:46:53.390093 master-0 kubenswrapper[32968]: I0309 16:46:53.390021 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28g6j\" (UniqueName: \"kubernetes.io/projected/2906d129-32ed-4de4-a463-8f62c576f742-kube-api-access-28g6j\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.390243 master-0 kubenswrapper[32968]: E0309 16:46:53.389984 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig podName:2906d129-32ed-4de4-a463-8f62c576f742 nodeName:}" failed. 
No retries permitted until 2026-03-09 16:46:53.889955369 +0000 UTC m=+39.993278089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig") pod "oauth-openshift-65d7d5bfd8-ks5bs" (UID: "2906d129-32ed-4de4-a463-8f62c576f742") : configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:53.390299 master-0 kubenswrapper[32968]: I0309 16:46:53.390249 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-audit-policies\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.390359 master-0 kubenswrapper[32968]: E0309 16:46:53.390328 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session podName:2906d129-32ed-4de4-a463-8f62c576f742 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:53.890249817 +0000 UTC m=+39.993572497 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session") pod "oauth-openshift-65d7d5bfd8-ks5bs" (UID: "2906d129-32ed-4de4-a463-8f62c576f742") : secret "v4-0-config-system-session" not found Mar 09 16:46:53.390536 master-0 kubenswrapper[32968]: I0309 16:46:53.390506 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-service-ca\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.392091 master-0 kubenswrapper[32968]: I0309 16:46:53.392047 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-login\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.394056 master-0 kubenswrapper[32968]: I0309 16:46:53.394000 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.394567 master-0 kubenswrapper[32968]: I0309 16:46:53.394526 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.394831 master-0 kubenswrapper[32968]: I0309 16:46:53.394764 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-router-certs\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.396829 master-0 kubenswrapper[32968]: I0309 16:46:53.396782 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.397845 master-0 kubenswrapper[32968]: I0309 16:46:53.397800 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-error\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.415832 master-0 kubenswrapper[32968]: I0309 16:46:53.415721 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28g6j\" (UniqueName: \"kubernetes.io/projected/2906d129-32ed-4de4-a463-8f62c576f742-kube-api-access-28g6j\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.898067 master-0 
kubenswrapper[32968]: I0309 16:46:53.897965 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.898695 master-0 kubenswrapper[32968]: I0309 16:46:53.898209 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:53.898695 master-0 kubenswrapper[32968]: E0309 16:46:53.898370 32968 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:53.898695 master-0 kubenswrapper[32968]: E0309 16:46:53.898493 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig podName:2906d129-32ed-4de4-a463-8f62c576f742 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:54.898469231 +0000 UTC m=+41.001791771 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig") pod "oauth-openshift-65d7d5bfd8-ks5bs" (UID: "2906d129-32ed-4de4-a463-8f62c576f742") : configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:53.907266 master-0 kubenswrapper[32968]: I0309 16:46:53.907222 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:54.915548 master-0 kubenswrapper[32968]: I0309 16:46:54.915464 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:54.916271 master-0 kubenswrapper[32968]: E0309 16:46:54.915762 32968 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:54.916271 master-0 kubenswrapper[32968]: E0309 16:46:54.915894 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig podName:2906d129-32ed-4de4-a463-8f62c576f742 nodeName:}" failed. No retries permitted until 2026-03-09 16:46:56.915859764 +0000 UTC m=+43.019182304 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig") pod "oauth-openshift-65d7d5bfd8-ks5bs" (UID: "2906d129-32ed-4de4-a463-8f62c576f742") : configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:56.952500 master-0 kubenswrapper[32968]: I0309 16:46:56.951615 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:46:56.952500 master-0 kubenswrapper[32968]: E0309 16:46:56.951825 32968 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:56.952500 master-0 kubenswrapper[32968]: E0309 16:46:56.951971 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig podName:2906d129-32ed-4de4-a463-8f62c576f742 nodeName:}" failed. No retries permitted until 2026-03-09 16:47:00.951937854 +0000 UTC m=+47.055260394 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig") pod "oauth-openshift-65d7d5bfd8-ks5bs" (UID: "2906d129-32ed-4de4-a463-8f62c576f742") : configmap "v4-0-config-system-cliconfig" not found Mar 09 16:46:57.845448 master-0 kubenswrapper[32968]: I0309 16:46:57.843188 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-869ff9c57d-w6zhf"] Mar 09 16:46:57.847583 master-0 kubenswrapper[32968]: I0309 16:46:57.846919 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:57.850933 master-0 kubenswrapper[32968]: I0309 16:46:57.850876 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 09 16:46:57.851276 master-0 kubenswrapper[32968]: I0309 16:46:57.851244 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 09 16:46:57.851461 master-0 kubenswrapper[32968]: I0309 16:46:57.851438 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 09 16:46:57.854940 master-0 kubenswrapper[32968]: I0309 16:46:57.851702 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 09 16:46:57.854940 master-0 kubenswrapper[32968]: I0309 16:46:57.851891 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-rpbqt" Mar 09 16:46:57.854940 master-0 kubenswrapper[32968]: I0309 16:46:57.852122 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 09 16:46:57.871726 master-0 kubenswrapper[32968]: I0309 16:46:57.871506 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-869ff9c57d-w6zhf"] 
Mar 09 16:46:57.972445 master-0 kubenswrapper[32968]: I0309 16:46:57.972262 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-serving-cert\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:57.972445 master-0 kubenswrapper[32968]: I0309 16:46:57.972393 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-console-config\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:57.972445 master-0 kubenswrapper[32968]: I0309 16:46:57.972435 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8mvv\" (UniqueName: \"kubernetes.io/projected/994004c1-0e66-4998-a952-52e41b4637f9-kube-api-access-b8mvv\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:57.973063 master-0 kubenswrapper[32968]: I0309 16:46:57.972475 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-oauth-config\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:57.973063 master-0 kubenswrapper[32968]: I0309 16:46:57.972503 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-service-ca\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:57.973063 master-0 kubenswrapper[32968]: I0309 16:46:57.972526 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-oauth-serving-cert\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.073758 master-0 kubenswrapper[32968]: I0309 16:46:58.073685 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-oauth-serving-cert\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.073981 master-0 kubenswrapper[32968]: I0309 16:46:58.073792 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-serving-cert\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.073981 master-0 kubenswrapper[32968]: I0309 16:46:58.073868 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-console-config\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.073981 master-0 kubenswrapper[32968]: I0309 16:46:58.073890 32968 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b8mvv\" (UniqueName: \"kubernetes.io/projected/994004c1-0e66-4998-a952-52e41b4637f9-kube-api-access-b8mvv\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.073981 master-0 kubenswrapper[32968]: I0309 16:46:58.073914 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-service-ca\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.073981 master-0 kubenswrapper[32968]: I0309 16:46:58.073930 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-oauth-config\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.075031 master-0 kubenswrapper[32968]: I0309 16:46:58.074977 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-oauth-serving-cert\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.075198 master-0 kubenswrapper[32968]: I0309 16:46:58.075157 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-console-config\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.075600 master-0 kubenswrapper[32968]: I0309 16:46:58.075560 32968 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-service-ca\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.091256 master-0 kubenswrapper[32968]: I0309 16:46:58.091121 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-serving-cert\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.092541 master-0 kubenswrapper[32968]: I0309 16:46:58.091923 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-oauth-config\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.097389 master-0 kubenswrapper[32968]: I0309 16:46:58.097163 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8mvv\" (UniqueName: \"kubernetes.io/projected/994004c1-0e66-4998-a952-52e41b4637f9-kube-api-access-b8mvv\") pod \"console-869ff9c57d-w6zhf\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.185009 master-0 kubenswrapper[32968]: I0309 16:46:58.184904 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:46:58.778067 master-0 kubenswrapper[32968]: I0309 16:46:58.777860 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-869ff9c57d-w6zhf"] Mar 09 16:46:58.783031 master-0 kubenswrapper[32968]: W0309 16:46:58.782940 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod994004c1_0e66_4998_a952_52e41b4637f9.slice/crio-feaa0bd35c84ef6c466715497f9ba60dc751c24035f7e64054c6a92c28fe2a62 WatchSource:0}: Error finding container feaa0bd35c84ef6c466715497f9ba60dc751c24035f7e64054c6a92c28fe2a62: Status 404 returned error can't find the container with id feaa0bd35c84ef6c466715497f9ba60dc751c24035f7e64054c6a92c28fe2a62 Mar 09 16:46:59.247887 master-0 kubenswrapper[32968]: I0309 16:46:59.247801 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-869ff9c57d-w6zhf" event={"ID":"994004c1-0e66-4998-a952-52e41b4637f9","Type":"ContainerStarted","Data":"feaa0bd35c84ef6c466715497f9ba60dc751c24035f7e64054c6a92c28fe2a62"} Mar 09 16:46:59.359959 master-0 kubenswrapper[32968]: I0309 16:46:59.359891 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs"] Mar 09 16:46:59.360628 master-0 kubenswrapper[32968]: E0309 16:46:59.360578 32968 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" podUID="2906d129-32ed-4de4-a463-8f62c576f742" Mar 09 16:47:00.259571 master-0 kubenswrapper[32968]: I0309 16:47:00.259356 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:47:00.276451 master-0 kubenswrapper[32968]: I0309 16:47:00.276381 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412033 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-service-ca\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412122 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-trusted-ca-bundle\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412141 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-router-certs\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412158 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-serving-cert\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412184 32968 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2906d129-32ed-4de4-a463-8f62c576f742-audit-dir\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412212 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-login\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412262 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-error\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412319 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412349 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-provider-selection\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412392 32968 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-ocp-branding-template\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412451 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28g6j\" (UniqueName: \"kubernetes.io/projected/2906d129-32ed-4de4-a463-8f62c576f742-kube-api-access-28g6j\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.412786 master-0 kubenswrapper[32968]: I0309 16:47:00.412471 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-audit-policies\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:00.414188 master-0 kubenswrapper[32968]: I0309 16:47:00.413755 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2906d129-32ed-4de4-a463-8f62c576f742-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:47:00.414650 master-0 kubenswrapper[32968]: I0309 16:47:00.414053 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:00.415240 master-0 kubenswrapper[32968]: I0309 16:47:00.414861 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:00.415240 master-0 kubenswrapper[32968]: I0309 16:47:00.415138 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.415240 master-0 kubenswrapper[32968]: I0309 16:47:00.415169 32968 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2906d129-32ed-4de4-a463-8f62c576f742-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.415240 master-0 kubenswrapper[32968]: I0309 16:47:00.415158 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:00.417108 master-0 kubenswrapper[32968]: I0309 16:47:00.417058 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.417336 master-0 kubenswrapper[32968]: I0309 16:47:00.417263 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.418729 master-0 kubenswrapper[32968]: I0309 16:47:00.418073 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.419817 master-0 kubenswrapper[32968]: I0309 16:47:00.419733 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.419904 master-0 kubenswrapper[32968]: I0309 16:47:00.419803 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2906d129-32ed-4de4-a463-8f62c576f742-kube-api-access-28g6j" (OuterVolumeSpecName: "kube-api-access-28g6j") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "kube-api-access-28g6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:47:00.420370 master-0 kubenswrapper[32968]: I0309 16:47:00.420330 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.422681 master-0 kubenswrapper[32968]: I0309 16:47:00.422608 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.423113 master-0 kubenswrapper[32968]: I0309 16:47:00.423084 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520785 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520848 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520865 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520881 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520901 32968 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520916 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520932 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.520941 master-0 kubenswrapper[32968]: I0309 16:47:00.520951 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.521546 master-0 kubenswrapper[32968]: I0309 16:47:00.520967 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28g6j\" (UniqueName: \"kubernetes.io/projected/2906d129-32ed-4de4-a463-8f62c576f742-kube-api-access-28g6j\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:00.521546 master-0 kubenswrapper[32968]: I0309 16:47:00.520987 32968 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:01.030774 master-0 kubenswrapper[32968]: I0309 16:47:01.030507 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:47:01.032722 master-0 kubenswrapper[32968]: I0309 16:47:01.032677 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65d7d5bfd8-ks5bs\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:47:01.132367 master-0 kubenswrapper[32968]: I0309 16:47:01.132269 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") pod \"2906d129-32ed-4de4-a463-8f62c576f742\" (UID: \"2906d129-32ed-4de4-a463-8f62c576f742\") " Mar 09 16:47:01.133087 master-0 kubenswrapper[32968]: I0309 16:47:01.133013 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2906d129-32ed-4de4-a463-8f62c576f742" (UID: "2906d129-32ed-4de4-a463-8f62c576f742"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:01.234065 master-0 kubenswrapper[32968]: I0309 16:47:01.233911 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2906d129-32ed-4de4-a463-8f62c576f742-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:01.266307 master-0 kubenswrapper[32968]: I0309 16:47:01.266227 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs" Mar 09 16:47:01.343464 master-0 kubenswrapper[32968]: I0309 16:47:01.340615 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w"] Mar 09 16:47:01.344576 master-0 kubenswrapper[32968]: I0309 16:47:01.344368 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.350755 master-0 kubenswrapper[32968]: I0309 16:47:01.349048 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs"] Mar 09 16:47:01.353693 master-0 kubenswrapper[32968]: I0309 16:47:01.353628 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-65d7d5bfd8-ks5bs"] Mar 09 16:47:01.353966 master-0 kubenswrapper[32968]: I0309 16:47:01.353732 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 09 16:47:01.354224 master-0 kubenswrapper[32968]: I0309 16:47:01.354197 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-dzbl7" Mar 09 16:47:01.354353 master-0 kubenswrapper[32968]: I0309 16:47:01.354212 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 
09 16:47:01.354412 master-0 kubenswrapper[32968]: I0309 16:47:01.354382 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 09 16:47:01.354859 master-0 kubenswrapper[32968]: I0309 16:47:01.354728 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.355995 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.356533 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w"] Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.356595 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.356802 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.357297 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.357979 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.358063 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 09 16:47:01.361229 master-0 kubenswrapper[32968]: I0309 16:47:01.358629 32968 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 09 16:47:01.385689 master-0 kubenswrapper[32968]: I0309 16:47:01.385628 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 09 16:47:01.389133 master-0 kubenswrapper[32968]: I0309 16:47:01.389085 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 09 16:47:01.440007 master-0 kubenswrapper[32968]: I0309 16:47:01.439928 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-service-ca\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440007 master-0 kubenswrapper[32968]: I0309 16:47:01.439987 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-dir\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440007 master-0 kubenswrapper[32968]: I0309 16:47:01.440008 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-router-certs\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440056 32968 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-policies\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440090 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-login\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440108 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440137 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440172 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vwgs\" (UniqueName: 
\"kubernetes.io/projected/33440c62-309f-4496-a32e-d0e9ecc5aac3-kube-api-access-4vwgs\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440196 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-session\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440216 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440236 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-error\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440261 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.440460 master-0 kubenswrapper[32968]: I0309 16:47:01.440287 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.485187 master-0 kubenswrapper[32968]: I0309 16:47:01.485121 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-26hg6"] Mar 09 16:47:01.488849 master-0 kubenswrapper[32968]: I0309 16:47:01.488777 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:01.491506 master-0 kubenswrapper[32968]: I0309 16:47:01.491463 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-26hg6"] Mar 09 16:47:01.496541 master-0 kubenswrapper[32968]: I0309 16:47:01.496486 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 09 16:47:01.496692 master-0 kubenswrapper[32968]: I0309 16:47:01.496564 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-8gpxb" Mar 09 16:47:01.500564 master-0 kubenswrapper[32968]: I0309 16:47:01.500518 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 09 16:47:01.541149 master-0 kubenswrapper[32968]: I0309 16:47:01.541091 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541389 master-0 kubenswrapper[32968]: I0309 16:47:01.541153 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541389 master-0 kubenswrapper[32968]: I0309 16:47:01.541200 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4vwgs\" (UniqueName: \"kubernetes.io/projected/33440c62-309f-4496-a32e-d0e9ecc5aac3-kube-api-access-4vwgs\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541389 master-0 kubenswrapper[32968]: I0309 16:47:01.541224 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-session\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541389 master-0 kubenswrapper[32968]: I0309 16:47:01.541313 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541389 master-0 kubenswrapper[32968]: I0309 16:47:01.541344 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-error\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541389 master-0 kubenswrapper[32968]: I0309 16:47:01.541378 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541413 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: \"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541481 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541543 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-service-ca\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541585 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-dir\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541729 master-0 
kubenswrapper[32968]: I0309 16:47:01.541617 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-router-certs\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541644 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-policies\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541675 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: \"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:01.541729 master-0 kubenswrapper[32968]: I0309 16:47:01.541713 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-login\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.543443 master-0 kubenswrapper[32968]: I0309 16:47:01.543379 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-policies\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.545591 master-0 kubenswrapper[32968]: I0309 16:47:01.544159 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.551134 master-0 kubenswrapper[32968]: I0309 16:47:01.549968 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-dir\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.553824 master-0 kubenswrapper[32968]: I0309 16:47:01.553766 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.554052 master-0 kubenswrapper[32968]: I0309 16:47:01.553994 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-service-ca\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 
16:47:01.555104 master-0 kubenswrapper[32968]: I0309 16:47:01.555064 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.556324 master-0 kubenswrapper[32968]: I0309 16:47:01.556302 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-login\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.560486 master-0 kubenswrapper[32968]: I0309 16:47:01.560334 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-error\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.560875 master-0 kubenswrapper[32968]: I0309 16:47:01.560857 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.561891 master-0 kubenswrapper[32968]: I0309 16:47:01.561826 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-router-certs\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.562069 master-0 kubenswrapper[32968]: I0309 16:47:01.562053 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-session\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.563298 master-0 kubenswrapper[32968]: I0309 16:47:01.563243 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.575994 master-0 kubenswrapper[32968]: I0309 16:47:01.575946 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vwgs\" (UniqueName: \"kubernetes.io/projected/33440c62-309f-4496-a32e-d0e9ecc5aac3-kube-api-access-4vwgs\") pod \"oauth-openshift-58c5d7cc86-f9w2w\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.648774 master-0 kubenswrapper[32968]: I0309 16:47:01.644032 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: 
\"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:01.648774 master-0 kubenswrapper[32968]: I0309 16:47:01.644133 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: \"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:01.648774 master-0 kubenswrapper[32968]: I0309 16:47:01.644943 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: \"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:01.648774 master-0 kubenswrapper[32968]: E0309 16:47:01.645022 32968 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 09 16:47:01.648774 master-0 kubenswrapper[32968]: E0309 16:47:01.646689 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-networking-console-plugin-cert podName:c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997 nodeName:}" failed. No retries permitted until 2026-03-09 16:47:02.145056206 +0000 UTC m=+48.248378746 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-26hg6" (UID: "c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997") : secret "networking-console-plugin-cert" not found Mar 09 16:47:01.695601 master-0 kubenswrapper[32968]: I0309 16:47:01.695535 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:01.736570 master-0 kubenswrapper[32968]: I0309 16:47:01.736035 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b698b4fc8-zx5n6"] Mar 09 16:47:01.741115 master-0 kubenswrapper[32968]: I0309 16:47:01.741058 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.755854 master-0 kubenswrapper[32968]: I0309 16:47:01.755800 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 09 16:47:01.757656 master-0 kubenswrapper[32968]: I0309 16:47:01.757269 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b698b4fc8-zx5n6"] Mar 09 16:47:01.845662 master-0 kubenswrapper[32968]: I0309 16:47:01.845615 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-oauth-serving-cert\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.845662 master-0 kubenswrapper[32968]: I0309 16:47:01.845663 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-console-config\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.845919 master-0 kubenswrapper[32968]: I0309 16:47:01.845694 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpt7z\" (UniqueName: \"kubernetes.io/projected/8fda5a84-b685-4333-858b-33123158c1e6-kube-api-access-gpt7z\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.845919 master-0 kubenswrapper[32968]: I0309 16:47:01.845723 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-oauth-config\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.845919 master-0 kubenswrapper[32968]: I0309 16:47:01.845760 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-service-ca\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.845919 master-0 kubenswrapper[32968]: I0309 16:47:01.845791 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-trusted-ca-bundle\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.845919 master-0 kubenswrapper[32968]: I0309 16:47:01.845810 
32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-serving-cert\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.950496 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-oauth-config\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.952657 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-service-ca\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.952779 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-trusted-ca-bundle\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.952814 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-serving-cert\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 
16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.952862 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-oauth-serving-cert\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.952895 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-console-config\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.953445 master-0 kubenswrapper[32968]: I0309 16:47:01.952942 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpt7z\" (UniqueName: \"kubernetes.io/projected/8fda5a84-b685-4333-858b-33123158c1e6-kube-api-access-gpt7z\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.957446 master-0 kubenswrapper[32968]: I0309 16:47:01.954610 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-service-ca\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.957446 master-0 kubenswrapper[32968]: I0309 16:47:01.955206 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-oauth-serving-cert\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " 
pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.957446 master-0 kubenswrapper[32968]: I0309 16:47:01.956076 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-console-config\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.957446 master-0 kubenswrapper[32968]: I0309 16:47:01.956905 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-trusted-ca-bundle\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.964395 master-0 kubenswrapper[32968]: I0309 16:47:01.964347 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-oauth-config\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:01.988269 master-0 kubenswrapper[32968]: I0309 16:47:01.987375 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-serving-cert\") pod \"console-7b698b4fc8-zx5n6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:02.020263 master-0 kubenswrapper[32968]: I0309 16:47:02.020211 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpt7z\" (UniqueName: \"kubernetes.io/projected/8fda5a84-b685-4333-858b-33123158c1e6-kube-api-access-gpt7z\") pod \"console-7b698b4fc8-zx5n6\" (UID: 
\"8fda5a84-b685-4333-858b-33123158c1e6\") " pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:02.074570 master-0 kubenswrapper[32968]: I0309 16:47:02.072854 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b698b4fc8-zx5n6" Mar 09 16:47:02.100904 master-0 kubenswrapper[32968]: I0309 16:47:02.100816 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2906d129-32ed-4de4-a463-8f62c576f742" path="/var/lib/kubelet/pods/2906d129-32ed-4de4-a463-8f62c576f742/volumes" Mar 09 16:47:02.156623 master-0 kubenswrapper[32968]: I0309 16:47:02.156538 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: \"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:02.162853 master-0 kubenswrapper[32968]: I0309 16:47:02.160942 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-26hg6\" (UID: \"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:02.431002 master-0 kubenswrapper[32968]: I0309 16:47:02.430929 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" Mar 09 16:47:04.713145 master-0 kubenswrapper[32968]: I0309 16:47:04.713076 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b698b4fc8-zx5n6"] Mar 09 16:47:04.718175 master-0 kubenswrapper[32968]: W0309 16:47:04.718066 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fda5a84_b685_4333_858b_33123158c1e6.slice/crio-b6c6db84395133bf9572a48bcc8e86fb8c8257d441f91abdc7d58960c1c686cf WatchSource:0}: Error finding container b6c6db84395133bf9572a48bcc8e86fb8c8257d441f91abdc7d58960c1c686cf: Status 404 returned error can't find the container with id b6c6db84395133bf9572a48bcc8e86fb8c8257d441f91abdc7d58960c1c686cf Mar 09 16:47:04.869572 master-0 kubenswrapper[32968]: I0309 16:47:04.867181 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-26hg6"] Mar 09 16:47:04.871579 master-0 kubenswrapper[32968]: I0309 16:47:04.870714 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w"] Mar 09 16:47:04.879048 master-0 kubenswrapper[32968]: W0309 16:47:04.878954 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33440c62_309f_4496_a32e_d0e9ecc5aac3.slice/crio-b2536eecfc04abaef979d74f1fba0188ae881bc894ed0d4be8aebaaecc3e1620 WatchSource:0}: Error finding container b2536eecfc04abaef979d74f1fba0188ae881bc894ed0d4be8aebaaecc3e1620: Status 404 returned error can't find the container with id b2536eecfc04abaef979d74f1fba0188ae881bc894ed0d4be8aebaaecc3e1620 Mar 09 16:47:04.881745 master-0 kubenswrapper[32968]: W0309 16:47:04.881298 32968 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5b3b57a_7c5a_4f2e_bc2d_c5a89f60f997.slice/crio-93fcb6291279dcd1bb1404f0b692cc15cb13bd2a6e5233669913b2f0f32f3ee6 WatchSource:0}: Error finding container 93fcb6291279dcd1bb1404f0b692cc15cb13bd2a6e5233669913b2f0f32f3ee6: Status 404 returned error can't find the container with id 93fcb6291279dcd1bb1404f0b692cc15cb13bd2a6e5233669913b2f0f32f3ee6 Mar 09 16:47:05.105361 master-0 kubenswrapper[32968]: I0309 16:47:05.105200 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-869ff9c57d-w6zhf"] Mar 09 16:47:05.143480 master-0 kubenswrapper[32968]: I0309 16:47:05.142173 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bd7656797-fzjhw"] Mar 09 16:47:05.143480 master-0 kubenswrapper[32968]: I0309 16:47:05.143201 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.164771 master-0 kubenswrapper[32968]: I0309 16:47:05.164724 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bd7656797-fzjhw"] Mar 09 16:47:05.226905 master-0 kubenswrapper[32968]: I0309 16:47:05.226782 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-serving-cert\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.226905 master-0 kubenswrapper[32968]: I0309 16:47:05.226902 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc9k\" (UniqueName: \"kubernetes.io/projected/4f93fd52-1872-4223-962b-c608b2737866-kube-api-access-7rc9k\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.227208 master-0 kubenswrapper[32968]: I0309 16:47:05.226980 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-oauth-serving-cert\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.227208 master-0 kubenswrapper[32968]: I0309 16:47:05.227144 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-oauth-config\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.227373 master-0 kubenswrapper[32968]: I0309 16:47:05.227345 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-console-config\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.227443 master-0 kubenswrapper[32968]: I0309 16:47:05.227380 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-trusted-ca-bundle\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.227443 master-0 kubenswrapper[32968]: I0309 16:47:05.227433 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-service-ca\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:05.303905 master-0 kubenswrapper[32968]: I0309 16:47:05.303825 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" event={"ID":"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997","Type":"ContainerStarted","Data":"93fcb6291279dcd1bb1404f0b692cc15cb13bd2a6e5233669913b2f0f32f3ee6"} Mar 09 16:47:05.306497 master-0 kubenswrapper[32968]: I0309 16:47:05.306454 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b698b4fc8-zx5n6" event={"ID":"8fda5a84-b685-4333-858b-33123158c1e6","Type":"ContainerStarted","Data":"b01d0c869ea1d167a340151a19787789a61a64c5fbb67d3bf03f6f87127e32ac"} Mar 09 16:47:05.306497 master-0 kubenswrapper[32968]: I0309 16:47:05.306496 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b698b4fc8-zx5n6" event={"ID":"8fda5a84-b685-4333-858b-33123158c1e6","Type":"ContainerStarted","Data":"b6c6db84395133bf9572a48bcc8e86fb8c8257d441f91abdc7d58960c1c686cf"} Mar 09 16:47:05.309799 master-0 kubenswrapper[32968]: I0309 16:47:05.309720 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-869ff9c57d-w6zhf" event={"ID":"994004c1-0e66-4998-a952-52e41b4637f9","Type":"ContainerStarted","Data":"60994d056730817f37000028d309b3bb30d752d18db52bdf84eff4c77f2902c6"} Mar 09 16:47:05.312337 master-0 kubenswrapper[32968]: I0309 16:47:05.312240 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" event={"ID":"33440c62-309f-4496-a32e-d0e9ecc5aac3","Type":"ContainerStarted","Data":"b2536eecfc04abaef979d74f1fba0188ae881bc894ed0d4be8aebaaecc3e1620"} Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.329200 32968 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-console-config\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.329265 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-trusted-ca-bundle\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.329499 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-service-ca\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.329627 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-serving-cert\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.329662 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc9k\" (UniqueName: \"kubernetes.io/projected/4f93fd52-1872-4223-962b-c608b2737866-kube-api-access-7rc9k\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.330082 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-oauth-serving-cert\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.329743 master-0 kubenswrapper[32968]: I0309 16:47:05.330254 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-oauth-config\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.334049 master-0 kubenswrapper[32968]: I0309 16:47:05.332906 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-console-config\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.334049 master-0 kubenswrapper[32968]: I0309 16:47:05.333239 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-oauth-serving-cert\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.334049 master-0 kubenswrapper[32968]: I0309 16:47:05.333367 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-trusted-ca-bundle\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.335493 master-0 kubenswrapper[32968]: I0309 16:47:05.334967 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-service-ca\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.335841 master-0 kubenswrapper[32968]: I0309 16:47:05.335799 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-serving-cert\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.354321 master-0 kubenswrapper[32968]: I0309 16:47:05.354230 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc9k\" (UniqueName: \"kubernetes.io/projected/4f93fd52-1872-4223-962b-c608b2737866-kube-api-access-7rc9k\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.355662 master-0 kubenswrapper[32968]: I0309 16:47:05.355396 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-oauth-config\") pod \"console-7bd7656797-fzjhw\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.368381 master-0 kubenswrapper[32968]: I0309 16:47:05.368280 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b698b4fc8-zx5n6" podStartSLOduration=4.368211387 podStartE2EDuration="4.368211387s" podCreationTimestamp="2026-03-09 16:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:47:05.335411265 +0000 UTC m=+51.438733825" watchObservedRunningTime="2026-03-09 16:47:05.368211387 +0000 UTC m=+51.471533927"
Mar 09 16:47:05.502728 master-0 kubenswrapper[32968]: I0309 16:47:05.502622 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:47:05.962373 master-0 kubenswrapper[32968]: I0309 16:47:05.962235 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-869ff9c57d-w6zhf" podStartSLOduration=3.492596933 podStartE2EDuration="8.962214702s" podCreationTimestamp="2026-03-09 16:46:57 +0000 UTC" firstStartedPulling="2026-03-09 16:46:58.786767684 +0000 UTC m=+44.890090224" lastFinishedPulling="2026-03-09 16:47:04.256385453 +0000 UTC m=+50.359707993" observedRunningTime="2026-03-09 16:47:05.366609225 +0000 UTC m=+51.469931765" watchObservedRunningTime="2026-03-09 16:47:05.962214702 +0000 UTC m=+52.065537242"
Mar 09 16:47:05.963529 master-0 kubenswrapper[32968]: I0309 16:47:05.963501 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bd7656797-fzjhw"]
Mar 09 16:47:06.977932 master-0 kubenswrapper[32968]: I0309 16:47:06.977834 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"]
Mar 09 16:47:06.980640 master-0 kubenswrapper[32968]: I0309 16:47:06.980545 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager" containerID="cri-o://537fb6b643ee9cbd475ca32fcc8df6dda7f1359c900f2721924da0fedeca0866" gracePeriod=30
Mar 09 16:47:07.027079 master-0 kubenswrapper[32968]: I0309 16:47:07.027023 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"]
Mar 09 16:47:07.027394 master-0 kubenswrapper[32968]: I0309 16:47:07.027357 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager" containerID="cri-o://3154e133fbf2500b6ea42f7db977fa73d4cbaf642b7311e9ee095fda1f327ff1" gracePeriod=30
Mar 09 16:47:07.144590 master-0 kubenswrapper[32968]: I0309 16:47:07.140815 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 09 16:47:07.144590 master-0 kubenswrapper[32968]: I0309 16:47:07.142048 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.146874 master-0 kubenswrapper[32968]: I0309 16:47:07.146825 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 09 16:47:07.147178 master-0 kubenswrapper[32968]: I0309 16:47:07.147081 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-cd6zf"
Mar 09 16:47:07.269395 master-0 kubenswrapper[32968]: I0309 16:47:07.269186 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 09 16:47:07.274673 master-0 kubenswrapper[32968]: I0309 16:47:07.274624 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.274872 master-0 kubenswrapper[32968]: I0309 16:47:07.274743 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.274872 master-0 kubenswrapper[32968]: I0309 16:47:07.274763 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-var-lock\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.346001 master-0 kubenswrapper[32968]: I0309 16:47:07.345914 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd7656797-fzjhw" event={"ID":"4f93fd52-1872-4223-962b-c608b2737866","Type":"ContainerStarted","Data":"c921962f3fbc046e5f45d71084ca5ce183fa630fc6552964a58710fbb9010e68"}
Mar 09 16:47:07.379592 master-0 kubenswrapper[32968]: I0309 16:47:07.378640 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.379920 master-0 kubenswrapper[32968]: I0309 16:47:07.379738 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.379920 master-0 kubenswrapper[32968]: I0309 16:47:07.379781 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-var-lock\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.379991 master-0 kubenswrapper[32968]: I0309 16:47:07.379869 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.379991 master-0 kubenswrapper[32968]: I0309 16:47:07.379914 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-var-lock\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.397719 master-0 kubenswrapper[32968]: I0309 16:47:07.397662 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.493676 master-0 kubenswrapper[32968]: I0309 16:47:07.493585 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 09 16:47:07.556911 master-0 kubenswrapper[32968]: I0309 16:47:07.556665 32968 patch_prober.go:28] interesting pod/controller-manager-5c5964c98f-tm4pb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body=
Mar 09 16:47:07.557238 master-0 kubenswrapper[32968]: I0309 16:47:07.557129 32968 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused"
Mar 09 16:47:08.186158 master-0 kubenswrapper[32968]: I0309 16:47:08.186082 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-869ff9c57d-w6zhf"
Mar 09 16:47:12.073184 master-0 kubenswrapper[32968]: I0309 16:47:12.073106 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b698b4fc8-zx5n6"
Mar 09 16:47:12.073184 master-0 kubenswrapper[32968]: I0309 16:47:12.073179 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b698b4fc8-zx5n6"
Mar 09 16:47:12.076532 master-0 kubenswrapper[32968]: I0309 16:47:12.076456 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 09 16:47:12.076665 master-0 kubenswrapper[32968]: I0309 16:47:12.076619 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 09 16:47:14.045455 master-0 kubenswrapper[32968]: I0309 16:47:14.045390 32968 scope.go:117] "RemoveContainer" containerID="2d59ac76dc4be81acf3ade62baf431dad3208a3f0083ed9e5b09fbc150f0a9be"
Mar 09 16:47:16.207285 master-0 kubenswrapper[32968]: I0309 16:47:16.206760 32968 patch_prober.go:28] interesting pod/route-controller-manager-675f85b8f7-bt9gb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body=
Mar 09 16:47:16.207285 master-0 kubenswrapper[32968]: I0309 16:47:16.206852 32968 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused"
Mar 09 16:47:17.397361 master-0 kubenswrapper[32968]: I0309 16:47:17.397297 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 09 16:47:17.555441 master-0 kubenswrapper[32968]: I0309 16:47:17.555353 32968 patch_prober.go:28] interesting pod/controller-manager-5c5964c98f-tm4pb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused" start-of-body=
Mar 09 16:47:17.555816 master-0 kubenswrapper[32968]: I0309 16:47:17.555463 32968 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.74:8443/healthz\": dial tcp 10.128.0.74:8443: connect: connection refused"
Mar 09 16:47:22.074269 master-0 kubenswrapper[32968]: I0309 16:47:22.074170 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 09 16:47:22.075252 master-0 kubenswrapper[32968]: I0309 16:47:22.074311 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 09 16:47:24.609388 master-0 kubenswrapper[32968]: I0309 16:47:24.609202 32968 generic.go:334] "Generic (PLEG): container finished" podID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerID="3154e133fbf2500b6ea42f7db977fa73d4cbaf642b7311e9ee095fda1f327ff1" exitCode=0
Mar 09 16:47:24.609388 master-0 kubenswrapper[32968]: I0309 16:47:24.609306 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" event={"ID":"8677cbd3-649f-41cd-8b8a-eadca971906b","Type":"ContainerDied","Data":"3154e133fbf2500b6ea42f7db977fa73d4cbaf642b7311e9ee095fda1f327ff1"}
Mar 09 16:47:24.609388 master-0 kubenswrapper[32968]: I0309 16:47:24.609349 32968 scope.go:117] "RemoveContainer" containerID="58ca4bfd8d3d92cf6b0638eb596cecb093134580ce5c529622e4707ab6f67862"
Mar 09 16:47:24.626883 master-0 kubenswrapper[32968]: I0309 16:47:24.626821 32968 generic.go:334] "Generic (PLEG): container finished" podID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerID="537fb6b643ee9cbd475ca32fcc8df6dda7f1359c900f2721924da0fedeca0866" exitCode=0
Mar 09 16:47:24.626883 master-0 kubenswrapper[32968]: I0309 16:47:24.626882 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" event={"ID":"7d1143ae-d94a-43f2-8e75-95aae13a5c57","Type":"ContainerDied","Data":"537fb6b643ee9cbd475ca32fcc8df6dda7f1359c900f2721924da0fedeca0866"}
Mar 09 16:47:24.661561 master-0 kubenswrapper[32968]: I0309 16:47:24.660853 32968 scope.go:117] "RemoveContainer" containerID="103d3eac07aecf0258cc2c832ca414dc5ada6722c47422884569884c3c3f57fc"
Mar 09 16:47:24.867516 master-0 kubenswrapper[32968]: I0309 16:47:24.867478 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 09 16:47:25.033130 master-0 kubenswrapper[32968]: I0309 16:47:25.033067 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"
Mar 09 16:47:25.058849 master-0 kubenswrapper[32968]: I0309 16:47:25.056076 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"
Mar 09 16:47:25.175857 master-0 kubenswrapper[32968]: I0309 16:47:25.175780 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") pod \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") "
Mar 09 16:47:25.176074 master-0 kubenswrapper[32968]: I0309 16:47:25.175918 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") pod \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") "
Mar 09 16:47:25.176074 master-0 kubenswrapper[32968]: I0309 16:47:25.175972 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw4zf\" (UniqueName: \"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") pod \"8677cbd3-649f-41cd-8b8a-eadca971906b\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") "
Mar 09 16:47:25.176074 master-0 kubenswrapper[32968]: I0309 16:47:25.176014 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") pod \"8677cbd3-649f-41cd-8b8a-eadca971906b\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") "
Mar 09 16:47:25.176074 master-0 kubenswrapper[32968]: I0309 16:47:25.176068 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") pod \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") "
Mar 09 16:47:25.176263 master-0 kubenswrapper[32968]: I0309 16:47:25.176188 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") pod \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") "
Mar 09 16:47:25.176263 master-0 kubenswrapper[32968]: I0309 16:47:25.176229 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") pod \"8677cbd3-649f-41cd-8b8a-eadca971906b\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") "
Mar 09 16:47:25.176263 master-0 kubenswrapper[32968]: I0309 16:47:25.176255 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") pod \"8677cbd3-649f-41cd-8b8a-eadca971906b\" (UID: \"8677cbd3-649f-41cd-8b8a-eadca971906b\") "
Mar 09 16:47:25.176399 master-0 kubenswrapper[32968]: I0309 16:47:25.176283 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") pod \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\" (UID: \"7d1143ae-d94a-43f2-8e75-95aae13a5c57\") "
Mar 09 16:47:25.177294 master-0 kubenswrapper[32968]: I0309 16:47:25.177240 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d1143ae-d94a-43f2-8e75-95aae13a5c57" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:47:25.177631 master-0 kubenswrapper[32968]: I0309 16:47:25.177584 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config" (OuterVolumeSpecName: "config") pod "7d1143ae-d94a-43f2-8e75-95aae13a5c57" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:47:25.177714 master-0 kubenswrapper[32968]: I0309 16:47:25.177632 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d1143ae-d94a-43f2-8e75-95aae13a5c57" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:47:25.177818 master-0 kubenswrapper[32968]: I0309 16:47:25.177786 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca" (OuterVolumeSpecName: "client-ca") pod "8677cbd3-649f-41cd-8b8a-eadca971906b" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:47:25.179112 master-0 kubenswrapper[32968]: I0309 16:47:25.179084 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d1143ae-d94a-43f2-8e75-95aae13a5c57" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:47:25.179280 master-0 kubenswrapper[32968]: I0309 16:47:25.179201 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config" (OuterVolumeSpecName: "config") pod "8677cbd3-649f-41cd-8b8a-eadca971906b" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:47:25.179349 master-0 kubenswrapper[32968]: I0309 16:47:25.179234 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8677cbd3-649f-41cd-8b8a-eadca971906b" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:47:25.181755 master-0 kubenswrapper[32968]: I0309 16:47:25.181705 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf" (OuterVolumeSpecName: "kube-api-access-hw4zf") pod "8677cbd3-649f-41cd-8b8a-eadca971906b" (UID: "8677cbd3-649f-41cd-8b8a-eadca971906b"). InnerVolumeSpecName "kube-api-access-hw4zf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:47:25.182762 master-0 kubenswrapper[32968]: I0309 16:47:25.182602 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz" (OuterVolumeSpecName: "kube-api-access-rl5cz") pod "7d1143ae-d94a-43f2-8e75-95aae13a5c57" (UID: "7d1143ae-d94a-43f2-8e75-95aae13a5c57"). InnerVolumeSpecName "kube-api-access-rl5cz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:47:25.279039 master-0 kubenswrapper[32968]: I0309 16:47:25.278958 32968 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d1143ae-d94a-43f2-8e75-95aae13a5c57-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279039 master-0 kubenswrapper[32968]: I0309 16:47:25.279026 32968 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279039 master-0 kubenswrapper[32968]: I0309 16:47:25.279042 32968 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8677cbd3-649f-41cd-8b8a-eadca971906b-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279039 master-0 kubenswrapper[32968]: I0309 16:47:25.279056 32968 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279668 master-0 kubenswrapper[32968]: I0309 16:47:25.279073 32968 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279668 master-0 kubenswrapper[32968]: I0309 16:47:25.279088 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl5cz\" (UniqueName: \"kubernetes.io/projected/7d1143ae-d94a-43f2-8e75-95aae13a5c57-kube-api-access-rl5cz\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279668 master-0 kubenswrapper[32968]: I0309 16:47:25.279104 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw4zf\" (UniqueName: \"kubernetes.io/projected/8677cbd3-649f-41cd-8b8a-eadca971906b-kube-api-access-hw4zf\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279668 master-0 kubenswrapper[32968]: I0309 16:47:25.279116 32968 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8677cbd3-649f-41cd-8b8a-eadca971906b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.279668 master-0 kubenswrapper[32968]: I0309 16:47:25.279129 32968 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1143ae-d94a-43f2-8e75-95aae13a5c57-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:47:25.336402 master-0 kubenswrapper[32968]: I0309 16:47:25.336336 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6797f84d95-xvlzk"]
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: E0309 16:47:25.336621 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: I0309 16:47:25.336636 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: E0309 16:47:25.336658 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: I0309 16:47:25.336663 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: E0309 16:47:25.336694 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: I0309 16:47:25.336701 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: E0309 16:47:25.336709 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager"
Mar 09 16:47:25.336771 master-0 kubenswrapper[32968]: I0309 16:47:25.336722 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager"
Mar 09 16:47:25.337120 master-0 kubenswrapper[32968]: I0309 16:47:25.336907 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager"
Mar 09 16:47:25.337120 master-0 kubenswrapper[32968]: I0309 16:47:25.336938 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" containerName="controller-manager"
Mar 09 16:47:25.337120 master-0 kubenswrapper[32968]: I0309 16:47:25.336952 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager"
Mar 09 16:47:25.337605 master-0 kubenswrapper[32968]: I0309 16:47:25.337574 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.481982 master-0 kubenswrapper[32968]: I0309 16:47:25.481788 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba063bc-e070-4dc7-9509-00d47cd734d4-serving-cert\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.481982 master-0 kubenswrapper[32968]: I0309 16:47:25.481863 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grggm\" (UniqueName: \"kubernetes.io/projected/5ba063bc-e070-4dc7-9509-00d47cd734d4-kube-api-access-grggm\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.481982 master-0 kubenswrapper[32968]: I0309 16:47:25.481945 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-config\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.482351 master-0 kubenswrapper[32968]: I0309 16:47:25.482284 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-client-ca\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.482451 master-0 kubenswrapper[32968]: I0309 16:47:25.482401 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-proxy-ca-bundles\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.584297 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-client-ca\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.584661 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-proxy-ca-bundles\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.584812 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba063bc-e070-4dc7-9509-00d47cd734d4-serving-cert\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.584836 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grggm\" (UniqueName: \"kubernetes.io/projected/5ba063bc-e070-4dc7-9509-00d47cd734d4-kube-api-access-grggm\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.585011 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-config\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.585658 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-client-ca\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.585683 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-proxy-ca-bundles\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.586489 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba063bc-e070-4dc7-9509-00d47cd734d4-config\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.593578 master-0 kubenswrapper[32968]: I0309 16:47:25.596061 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6797f84d95-xvlzk"]
Mar 09 16:47:25.600779 master-0 kubenswrapper[32968]: I0309 16:47:25.600715 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ba063bc-e070-4dc7-9509-00d47cd734d4-serving-cert\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk"
Mar 09 16:47:25.607543 master-0 kubenswrapper[32968]: I0309 16:47:25.607136 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 09 16:47:25.639958 master-0 kubenswrapper[32968]: I0309 16:47:25.639857 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-gh9dp" event={"ID":"4ec214b8-5e2d-48a6-bed4-7859b5c423e1","Type":"ContainerStarted","Data":"6da1e53c7a972a843940a71c231e3331337b8c0d489e2e2e87f09121bb144040"}
Mar 09 16:47:25.640654 master-0 kubenswrapper[32968]: I0309 16:47:25.640193 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-gh9dp"
Mar 09 16:47:25.642092 master-0 kubenswrapper[32968]: I0309 16:47:25.642016 32968 patch_prober.go:28] interesting pod/downloads-84f57b9877-gh9dp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body=
Mar 09 16:47:25.642191 master-0 kubenswrapper[32968]: I0309 16:47:25.642131 32968 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-gh9dp" podUID="4ec214b8-5e2d-48a6-bed4-7859b5c423e1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused"
Mar 09 16:47:25.643643 master-0 kubenswrapper[32968]: I0309
16:47:25.643575 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd7656797-fzjhw" event={"ID":"4f93fd52-1872-4223-962b-c608b2737866","Type":"ContainerStarted","Data":"6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97"} Mar 09 16:47:25.644796 master-0 kubenswrapper[32968]: I0309 16:47:25.644747 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a8c6cb24-23a5-42b4-b76a-69455d914ea4","Type":"ContainerStarted","Data":"bbaeb23015136cd4763fa71be39603288ead805764b2345b4f76c09c59404f06"} Mar 09 16:47:25.647240 master-0 kubenswrapper[32968]: I0309 16:47:25.647157 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" event={"ID":"7d1143ae-d94a-43f2-8e75-95aae13a5c57","Type":"ContainerDied","Data":"54c99acd4595efc88e774e161b1003d606fce8ae9e7b893bf3102130946bd8ca"} Mar 09 16:47:25.647240 master-0 kubenswrapper[32968]: I0309 16:47:25.647237 32968 scope.go:117] "RemoveContainer" containerID="537fb6b643ee9cbd475ca32fcc8df6dda7f1359c900f2721924da0fedeca0866" Mar 09 16:47:25.647633 master-0 kubenswrapper[32968]: I0309 16:47:25.647325 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c5964c98f-tm4pb" Mar 09 16:47:25.666715 master-0 kubenswrapper[32968]: I0309 16:47:25.666662 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" Mar 09 16:47:25.666851 master-0 kubenswrapper[32968]: I0309 16:47:25.666736 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb" event={"ID":"8677cbd3-649f-41cd-8b8a-eadca971906b","Type":"ContainerDied","Data":"6dbe08db551f1aa4c38325f3c72db4605aa7c1ae35053f4501ff98795f9a0d02"} Mar 09 16:47:25.671224 master-0 kubenswrapper[32968]: I0309 16:47:25.671149 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" event={"ID":"33440c62-309f-4496-a32e-d0e9ecc5aac3","Type":"ContainerStarted","Data":"6cb17b7f506ba153ac5078fd6f00a5489b0e9384f588c27bf1eb504aacc1079b"} Mar 09 16:47:25.671615 master-0 kubenswrapper[32968]: I0309 16:47:25.671571 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:25.674104 master-0 kubenswrapper[32968]: I0309 16:47:25.674055 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" event={"ID":"c5b3b57a-7c5a-4f2e-bc2d-c5a89f60f997","Type":"ContainerStarted","Data":"b38d000e3b88c70cfae2cf7796081c166f6ed8a41641c00bcb00691cc880e6b4"} Mar 09 16:47:25.681008 master-0 kubenswrapper[32968]: I0309 16:47:25.680719 32968 scope.go:117] "RemoveContainer" containerID="3154e133fbf2500b6ea42f7db977fa73d4cbaf642b7311e9ee095fda1f327ff1" Mar 09 16:47:25.947226 master-0 kubenswrapper[32968]: I0309 16:47:25.943195 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grggm\" (UniqueName: \"kubernetes.io/projected/5ba063bc-e070-4dc7-9509-00d47cd734d4-kube-api-access-grggm\") pod \"controller-manager-6797f84d95-xvlzk\" (UID: \"5ba063bc-e070-4dc7-9509-00d47cd734d4\") " 
pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" Mar 09 16:47:25.965782 master-0 kubenswrapper[32968]: I0309 16:47:25.965618 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" Mar 09 16:47:26.293071 master-0 kubenswrapper[32968]: I0309 16:47:26.292730 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:47:26.685152 master-0 kubenswrapper[32968]: I0309 16:47:26.685045 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a8c6cb24-23a5-42b4-b76a-69455d914ea4","Type":"ContainerStarted","Data":"0bca27d4af6f7194ec0be01b4cbd958cc84c07b53f4dbefbfe1331668056c492"} Mar 09 16:47:26.685811 master-0 kubenswrapper[32968]: I0309 16:47:26.685767 32968 patch_prober.go:28] interesting pod/downloads-84f57b9877-gh9dp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 09 16:47:26.685877 master-0 kubenswrapper[32968]: I0309 16:47:26.685800 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="a8c6cb24-23a5-42b4-b76a-69455d914ea4" containerName="installer" containerID="cri-o://0bca27d4af6f7194ec0be01b4cbd958cc84c07b53f4dbefbfe1331668056c492" gracePeriod=30 Mar 09 16:47:26.686027 master-0 kubenswrapper[32968]: I0309 16:47:26.685827 32968 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-gh9dp" podUID="4ec214b8-5e2d-48a6-bed4-7859b5c423e1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 09 16:47:27.075511 master-0 
kubenswrapper[32968]: W0309 16:47:27.073632 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba063bc_e070_4dc7_9509_00d47cd734d4.slice/crio-584d557c3dacc757685c7a31a12d66be9adb7d7cbbd2479b764c1ee71c1a198c WatchSource:0}: Error finding container 584d557c3dacc757685c7a31a12d66be9adb7d7cbbd2479b764c1ee71c1a198c: Status 404 returned error can't find the container with id 584d557c3dacc757685c7a31a12d66be9adb7d7cbbd2479b764c1ee71c1a198c Mar 09 16:47:27.077111 master-0 kubenswrapper[32968]: I0309 16:47:27.077011 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"] Mar 09 16:47:27.079506 master-0 kubenswrapper[32968]: I0309 16:47:27.079446 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6797f84d95-xvlzk"] Mar 09 16:47:27.206461 master-0 kubenswrapper[32968]: I0309 16:47:27.202688 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c5964c98f-tm4pb"] Mar 09 16:47:27.359584 master-0 kubenswrapper[32968]: I0309 16:47:27.359523 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4"] Mar 09 16:47:27.360480 master-0 kubenswrapper[32968]: I0309 16:47:27.360464 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" containerName="route-controller-manager" Mar 09 16:47:27.361166 master-0 kubenswrapper[32968]: I0309 16:47:27.361152 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.366194 master-0 kubenswrapper[32968]: I0309 16:47:27.366101 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-4n2zt" Mar 09 16:47:27.366194 master-0 kubenswrapper[32968]: I0309 16:47:27.366160 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 09 16:47:27.366472 master-0 kubenswrapper[32968]: I0309 16:47:27.366440 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 09 16:47:27.366599 master-0 kubenswrapper[32968]: I0309 16:47:27.366549 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 09 16:47:27.366700 master-0 kubenswrapper[32968]: I0309 16:47:27.366678 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 09 16:47:27.366700 master-0 kubenswrapper[32968]: I0309 16:47:27.366690 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 16:47:27.434913 master-0 kubenswrapper[32968]: I0309 16:47:27.434830 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b9964e-03fb-4e2e-80b1-6576824191e5-serving-cert\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.434913 master-0 kubenswrapper[32968]: I0309 16:47:27.434910 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s44kg\" (UniqueName: \"kubernetes.io/projected/f9b9964e-03fb-4e2e-80b1-6576824191e5-kube-api-access-s44kg\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.435194 master-0 kubenswrapper[32968]: I0309 16:47:27.434950 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9b9964e-03fb-4e2e-80b1-6576824191e5-client-ca\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.435194 master-0 kubenswrapper[32968]: I0309 16:47:27.434989 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b9964e-03fb-4e2e-80b1-6576824191e5-config\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.536991 master-0 kubenswrapper[32968]: I0309 16:47:27.536920 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b9964e-03fb-4e2e-80b1-6576824191e5-serving-cert\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.536991 master-0 kubenswrapper[32968]: I0309 16:47:27.536980 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s44kg\" (UniqueName: \"kubernetes.io/projected/f9b9964e-03fb-4e2e-80b1-6576824191e5-kube-api-access-s44kg\") pod 
\"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.536991 master-0 kubenswrapper[32968]: I0309 16:47:27.537016 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9b9964e-03fb-4e2e-80b1-6576824191e5-client-ca\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.537367 master-0 kubenswrapper[32968]: I0309 16:47:27.537051 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b9964e-03fb-4e2e-80b1-6576824191e5-config\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.538141 master-0 kubenswrapper[32968]: I0309 16:47:27.538102 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9b9964e-03fb-4e2e-80b1-6576824191e5-config\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.541155 master-0 kubenswrapper[32968]: I0309 16:47:27.541115 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b9964e-03fb-4e2e-80b1-6576824191e5-serving-cert\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.554158 master-0 kubenswrapper[32968]: I0309 
16:47:27.554123 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9b9964e-03fb-4e2e-80b1-6576824191e5-client-ca\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.638288 master-0 kubenswrapper[32968]: I0309 16:47:27.638053 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4"] Mar 09 16:47:27.640347 master-0 kubenswrapper[32968]: I0309 16:47:27.640232 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" podStartSLOduration=8.920441873 podStartE2EDuration="28.640216622s" podCreationTimestamp="2026-03-09 16:46:59 +0000 UTC" firstStartedPulling="2026-03-09 16:47:04.882231075 +0000 UTC m=+50.985553615" lastFinishedPulling="2026-03-09 16:47:24.602005824 +0000 UTC m=+70.705328364" observedRunningTime="2026-03-09 16:47:27.633672148 +0000 UTC m=+73.736994688" watchObservedRunningTime="2026-03-09 16:47:27.640216622 +0000 UTC m=+73.743539162" Mar 09 16:47:27.695817 master-0 kubenswrapper[32968]: I0309 16:47:27.695774 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_a8c6cb24-23a5-42b4-b76a-69455d914ea4/installer/0.log" Mar 09 16:47:27.696398 master-0 kubenswrapper[32968]: I0309 16:47:27.696374 32968 generic.go:334] "Generic (PLEG): container finished" podID="a8c6cb24-23a5-42b4-b76a-69455d914ea4" containerID="0bca27d4af6f7194ec0be01b4cbd958cc84c07b53f4dbefbfe1331668056c492" exitCode=2 Mar 09 16:47:27.696528 master-0 kubenswrapper[32968]: I0309 16:47:27.696510 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" 
event={"ID":"a8c6cb24-23a5-42b4-b76a-69455d914ea4","Type":"ContainerDied","Data":"0bca27d4af6f7194ec0be01b4cbd958cc84c07b53f4dbefbfe1331668056c492"} Mar 09 16:47:27.698678 master-0 kubenswrapper[32968]: I0309 16:47:27.698636 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" event={"ID":"5ba063bc-e070-4dc7-9509-00d47cd734d4","Type":"ContainerStarted","Data":"ad0a1ba686e5fe35a25ad67bed75c3aff0170057b7e53604d9da160724f8343a"} Mar 09 16:47:27.698781 master-0 kubenswrapper[32968]: I0309 16:47:27.698689 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" Mar 09 16:47:27.698781 master-0 kubenswrapper[32968]: I0309 16:47:27.698703 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" event={"ID":"5ba063bc-e070-4dc7-9509-00d47cd734d4","Type":"ContainerStarted","Data":"584d557c3dacc757685c7a31a12d66be9adb7d7cbbd2479b764c1ee71c1a198c"} Mar 09 16:47:27.701952 master-0 kubenswrapper[32968]: I0309 16:47:27.701888 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s44kg\" (UniqueName: \"kubernetes.io/projected/f9b9964e-03fb-4e2e-80b1-6576824191e5-kube-api-access-s44kg\") pod \"route-controller-manager-5777869874-vlrx4\" (UID: \"f9b9964e-03fb-4e2e-80b1-6576824191e5\") " pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:27.703773 master-0 kubenswrapper[32968]: I0309 16:47:27.703751 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" Mar 09 16:47:27.717046 master-0 kubenswrapper[32968]: I0309 16:47:27.716994 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_a8c6cb24-23a5-42b4-b76a-69455d914ea4/installer/0.log" 
Mar 09 16:47:27.717201 master-0 kubenswrapper[32968]: I0309 16:47:27.717081 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 09 16:47:27.848581 master-0 kubenswrapper[32968]: I0309 16:47:27.848518 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kube-api-access\") pod \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " Mar 09 16:47:27.848581 master-0 kubenswrapper[32968]: I0309 16:47:27.848585 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kubelet-dir\") pod \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " Mar 09 16:47:27.848907 master-0 kubenswrapper[32968]: I0309 16:47:27.848645 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-var-lock\") pod \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\" (UID: \"a8c6cb24-23a5-42b4-b76a-69455d914ea4\") " Mar 09 16:47:27.848907 master-0 kubenswrapper[32968]: I0309 16:47:27.848788 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8c6cb24-23a5-42b4-b76a-69455d914ea4" (UID: "a8c6cb24-23a5-42b4-b76a-69455d914ea4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:47:27.848907 master-0 kubenswrapper[32968]: I0309 16:47:27.848857 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-var-lock" (OuterVolumeSpecName: "var-lock") pod "a8c6cb24-23a5-42b4-b76a-69455d914ea4" (UID: "a8c6cb24-23a5-42b4-b76a-69455d914ea4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:47:27.849158 master-0 kubenswrapper[32968]: I0309 16:47:27.849055 32968 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:27.849158 master-0 kubenswrapper[32968]: I0309 16:47:27.849100 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a8c6cb24-23a5-42b4-b76a-69455d914ea4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:27.851725 master-0 kubenswrapper[32968]: I0309 16:47:27.851689 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8c6cb24-23a5-42b4-b76a-69455d914ea4" (UID: "a8c6cb24-23a5-42b4-b76a-69455d914ea4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:47:27.944649 master-0 kubenswrapper[32968]: I0309 16:47:27.944520 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-26hg6" podStartSLOduration=7.280687212 podStartE2EDuration="26.944499303s" podCreationTimestamp="2026-03-09 16:47:01 +0000 UTC" firstStartedPulling="2026-03-09 16:47:04.888516592 +0000 UTC m=+50.991839132" lastFinishedPulling="2026-03-09 16:47:24.552328683 +0000 UTC m=+70.655651223" observedRunningTime="2026-03-09 16:47:27.94285064 +0000 UTC m=+74.046173190" watchObservedRunningTime="2026-03-09 16:47:27.944499303 +0000 UTC m=+74.047821843" Mar 09 16:47:27.960964 master-0 kubenswrapper[32968]: I0309 16:47:27.951657 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8c6cb24-23a5-42b4-b76a-69455d914ea4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:27.989450 master-0 kubenswrapper[32968]: I0309 16:47:27.985943 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:28.093086 master-0 kubenswrapper[32968]: I0309 16:47:28.093008 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1143ae-d94a-43f2-8e75-95aae13a5c57" path="/var/lib/kubelet/pods/7d1143ae-d94a-43f2-8e75-95aae13a5c57/volumes" Mar 09 16:47:28.728769 master-0 kubenswrapper[32968]: I0309 16:47:28.728663 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_a8c6cb24-23a5-42b4-b76a-69455d914ea4/installer/0.log" Mar 09 16:47:28.729547 master-0 kubenswrapper[32968]: I0309 16:47:28.729398 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 09 16:47:28.729952 master-0 kubenswrapper[32968]: I0309 16:47:28.729904 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"a8c6cb24-23a5-42b4-b76a-69455d914ea4","Type":"ContainerDied","Data":"bbaeb23015136cd4763fa71be39603288ead805764b2345b4f76c09c59404f06"} Mar 09 16:47:28.730017 master-0 kubenswrapper[32968]: I0309 16:47:28.729961 32968 scope.go:117] "RemoveContainer" containerID="0bca27d4af6f7194ec0be01b4cbd958cc84c07b53f4dbefbfe1331668056c492" Mar 09 16:47:28.739000 master-0 kubenswrapper[32968]: I0309 16:47:28.738894 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-gh9dp" podStartSLOduration=4.972216187 podStartE2EDuration="49.738871666s" podCreationTimestamp="2026-03-09 16:46:39 +0000 UTC" firstStartedPulling="2026-03-09 16:46:39.865458266 +0000 UTC m=+25.968780816" lastFinishedPulling="2026-03-09 16:47:24.632113755 +0000 UTC m=+70.735436295" observedRunningTime="2026-03-09 16:47:28.736960855 +0000 UTC m=+74.840283425" watchObservedRunningTime="2026-03-09 16:47:28.738871666 +0000 UTC m=+74.842194206" Mar 09 16:47:29.411461 master-0 kubenswrapper[32968]: I0309 16:47:29.411351 32968 patch_prober.go:28] interesting pod/downloads-84f57b9877-gh9dp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 09 16:47:29.411461 master-0 kubenswrapper[32968]: I0309 16:47:29.411406 32968 patch_prober.go:28] interesting pod/downloads-84f57b9877-gh9dp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 09 16:47:29.411907 master-0 kubenswrapper[32968]: I0309 
16:47:29.411480 32968 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-gh9dp" podUID="4ec214b8-5e2d-48a6-bed4-7859b5c423e1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 09 16:47:29.411907 master-0 kubenswrapper[32968]: I0309 16:47:29.411496 32968 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-gh9dp" podUID="4ec214b8-5e2d-48a6-bed4-7859b5c423e1" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 09 16:47:29.698020 master-0 kubenswrapper[32968]: I0309 16:47:29.697873 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 09 16:47:29.698590 master-0 kubenswrapper[32968]: E0309 16:47:29.698556 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c6cb24-23a5-42b4-b76a-69455d914ea4" containerName="installer" Mar 09 16:47:29.698639 master-0 kubenswrapper[32968]: I0309 16:47:29.698591 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c6cb24-23a5-42b4-b76a-69455d914ea4" containerName="installer" Mar 09 16:47:29.698959 master-0 kubenswrapper[32968]: I0309 16:47:29.698918 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c6cb24-23a5-42b4-b76a-69455d914ea4" containerName="installer" Mar 09 16:47:29.699912 master-0 kubenswrapper[32968]: I0309 16:47:29.699849 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.710083 master-0 kubenswrapper[32968]: I0309 16:47:29.710002 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-cd6zf" Mar 09 16:47:29.724041 master-0 kubenswrapper[32968]: I0309 16:47:29.723986 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 09 16:47:29.796221 master-0 kubenswrapper[32968]: I0309 16:47:29.790250 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kube-api-access\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.796221 master-0 kubenswrapper[32968]: I0309 16:47:29.790440 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-var-lock\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.796221 master-0 kubenswrapper[32968]: I0309 16:47:29.790470 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.891791 master-0 kubenswrapper[32968]: I0309 16:47:29.891641 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-var-lock\") pod 
\"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.891791 master-0 kubenswrapper[32968]: I0309 16:47:29.891787 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.891791 master-0 kubenswrapper[32968]: I0309 16:47:29.891726 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-var-lock\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.892299 master-0 kubenswrapper[32968]: I0309 16:47:29.891907 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kube-api-access\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:29.892299 master-0 kubenswrapper[32968]: I0309 16:47:29.891917 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:30.278111 master-0 kubenswrapper[32968]: I0309 16:47:30.278018 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 09 16:47:31.302180 master-0 kubenswrapper[32968]: I0309 16:47:31.300258 32968 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4"] Mar 09 16:47:31.309370 master-0 kubenswrapper[32968]: W0309 16:47:31.309297 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b9964e_03fb_4e2e_80b1_6576824191e5.slice/crio-e970c79cf8d38e1c3fb12c4720b05615b1e01636dc8f23b4507202ec13ab9080 WatchSource:0}: Error finding container e970c79cf8d38e1c3fb12c4720b05615b1e01636dc8f23b4507202ec13ab9080: Status 404 returned error can't find the container with id e970c79cf8d38e1c3fb12c4720b05615b1e01636dc8f23b4507202ec13ab9080 Mar 09 16:47:31.371268 master-0 kubenswrapper[32968]: I0309 16:47:31.371194 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-869ff9c57d-w6zhf" podUID="994004c1-0e66-4998-a952-52e41b4637f9" containerName="console" containerID="cri-o://60994d056730817f37000028d309b3bb30d752d18db52bdf84eff4c77f2902c6" gracePeriod=15 Mar 09 16:47:31.845904 master-0 kubenswrapper[32968]: I0309 16:47:31.830934 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kube-api-access\") pod \"installer-6-master-0\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:31.874092 master-0 kubenswrapper[32968]: I0309 16:47:31.873964 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-869ff9c57d-w6zhf_994004c1-0e66-4998-a952-52e41b4637f9/console/0.log" Mar 09 16:47:31.874092 master-0 kubenswrapper[32968]: I0309 16:47:31.874063 32968 generic.go:334] "Generic (PLEG): container finished" podID="994004c1-0e66-4998-a952-52e41b4637f9" containerID="60994d056730817f37000028d309b3bb30d752d18db52bdf84eff4c77f2902c6" exitCode=2 Mar 09 16:47:31.875140 master-0 kubenswrapper[32968]: I0309 
16:47:31.874234 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-869ff9c57d-w6zhf" event={"ID":"994004c1-0e66-4998-a952-52e41b4637f9","Type":"ContainerDied","Data":"60994d056730817f37000028d309b3bb30d752d18db52bdf84eff4c77f2902c6"} Mar 09 16:47:31.876762 master-0 kubenswrapper[32968]: I0309 16:47:31.876687 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" event={"ID":"f9b9964e-03fb-4e2e-80b1-6576824191e5","Type":"ContainerStarted","Data":"c86eb1082c4db8b4244e74ec85342259ae62138a2b4c7541a220a0e2d8aac8d9"} Mar 09 16:47:31.876762 master-0 kubenswrapper[32968]: I0309 16:47:31.876737 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" event={"ID":"f9b9964e-03fb-4e2e-80b1-6576824191e5","Type":"ContainerStarted","Data":"e970c79cf8d38e1c3fb12c4720b05615b1e01636dc8f23b4507202ec13ab9080"} Mar 09 16:47:32.074758 master-0 kubenswrapper[32968]: I0309 16:47:32.074545 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 09 16:47:32.074758 master-0 kubenswrapper[32968]: I0309 16:47:32.074627 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 09 16:47:32.127687 master-0 kubenswrapper[32968]: I0309 16:47:32.127607 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 09 16:47:32.231265 master-0 kubenswrapper[32968]: I0309 16:47:32.229230 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"] Mar 09 16:47:32.663526 master-0 kubenswrapper[32968]: I0309 16:47:32.651109 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-675f85b8f7-bt9gb"] Mar 09 16:47:32.731759 master-0 kubenswrapper[32968]: I0309 16:47:32.731706 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-869ff9c57d-w6zhf_994004c1-0e66-4998-a952-52e41b4637f9/console/0.log" Mar 09 16:47:32.731976 master-0 kubenswrapper[32968]: I0309 16:47:32.731797 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:47:32.870923 master-0 kubenswrapper[32968]: I0309 16:47:32.870856 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8mvv\" (UniqueName: \"kubernetes.io/projected/994004c1-0e66-4998-a952-52e41b4637f9-kube-api-access-b8mvv\") pod \"994004c1-0e66-4998-a952-52e41b4637f9\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " Mar 09 16:47:32.871272 master-0 kubenswrapper[32968]: I0309 16:47:32.870982 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-serving-cert\") pod \"994004c1-0e66-4998-a952-52e41b4637f9\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " Mar 09 16:47:32.871272 master-0 kubenswrapper[32968]: I0309 16:47:32.871061 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-oauth-config\") pod 
\"994004c1-0e66-4998-a952-52e41b4637f9\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " Mar 09 16:47:32.871272 master-0 kubenswrapper[32968]: I0309 16:47:32.871091 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-service-ca\") pod \"994004c1-0e66-4998-a952-52e41b4637f9\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " Mar 09 16:47:32.871272 master-0 kubenswrapper[32968]: I0309 16:47:32.871127 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-oauth-serving-cert\") pod \"994004c1-0e66-4998-a952-52e41b4637f9\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " Mar 09 16:47:32.871272 master-0 kubenswrapper[32968]: I0309 16:47:32.871158 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-console-config\") pod \"994004c1-0e66-4998-a952-52e41b4637f9\" (UID: \"994004c1-0e66-4998-a952-52e41b4637f9\") " Mar 09 16:47:32.872465 master-0 kubenswrapper[32968]: I0309 16:47:32.872350 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-service-ca" (OuterVolumeSpecName: "service-ca") pod "994004c1-0e66-4998-a952-52e41b4637f9" (UID: "994004c1-0e66-4998-a952-52e41b4637f9"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:32.872547 master-0 kubenswrapper[32968]: I0309 16:47:32.872375 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-console-config" (OuterVolumeSpecName: "console-config") pod "994004c1-0e66-4998-a952-52e41b4637f9" (UID: "994004c1-0e66-4998-a952-52e41b4637f9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:32.872547 master-0 kubenswrapper[32968]: I0309 16:47:32.872431 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "994004c1-0e66-4998-a952-52e41b4637f9" (UID: "994004c1-0e66-4998-a952-52e41b4637f9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:47:32.874491 master-0 kubenswrapper[32968]: I0309 16:47:32.874422 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "994004c1-0e66-4998-a952-52e41b4637f9" (UID: "994004c1-0e66-4998-a952-52e41b4637f9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:32.874900 master-0 kubenswrapper[32968]: I0309 16:47:32.874865 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "994004c1-0e66-4998-a952-52e41b4637f9" (UID: "994004c1-0e66-4998-a952-52e41b4637f9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:47:32.875897 master-0 kubenswrapper[32968]: I0309 16:47:32.875859 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994004c1-0e66-4998-a952-52e41b4637f9-kube-api-access-b8mvv" (OuterVolumeSpecName: "kube-api-access-b8mvv") pod "994004c1-0e66-4998-a952-52e41b4637f9" (UID: "994004c1-0e66-4998-a952-52e41b4637f9"). InnerVolumeSpecName "kube-api-access-b8mvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:47:32.885026 master-0 kubenswrapper[32968]: I0309 16:47:32.884983 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-869ff9c57d-w6zhf_994004c1-0e66-4998-a952-52e41b4637f9/console/0.log" Mar 09 16:47:32.885715 master-0 kubenswrapper[32968]: I0309 16:47:32.885660 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-869ff9c57d-w6zhf" event={"ID":"994004c1-0e66-4998-a952-52e41b4637f9","Type":"ContainerDied","Data":"feaa0bd35c84ef6c466715497f9ba60dc751c24035f7e64054c6a92c28fe2a62"} Mar 09 16:47:32.885803 master-0 kubenswrapper[32968]: I0309 16:47:32.885726 32968 scope.go:117] "RemoveContainer" containerID="60994d056730817f37000028d309b3bb30d752d18db52bdf84eff4c77f2902c6" Mar 09 16:47:32.885803 master-0 kubenswrapper[32968]: I0309 16:47:32.885734 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-869ff9c57d-w6zhf" Mar 09 16:47:32.886053 master-0 kubenswrapper[32968]: I0309 16:47:32.885987 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:32.973372 master-0 kubenswrapper[32968]: I0309 16:47:32.973188 32968 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-console-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:32.973372 master-0 kubenswrapper[32968]: I0309 16:47:32.973260 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8mvv\" (UniqueName: \"kubernetes.io/projected/994004c1-0e66-4998-a952-52e41b4637f9-kube-api-access-b8mvv\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:32.973691 master-0 kubenswrapper[32968]: I0309 16:47:32.973448 32968 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:32.973748 master-0 kubenswrapper[32968]: I0309 16:47:32.973687 32968 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/994004c1-0e66-4998-a952-52e41b4637f9-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:32.973748 master-0 kubenswrapper[32968]: I0309 16:47:32.973720 32968 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:32.973748 master-0 kubenswrapper[32968]: I0309 16:47:32.973739 32968 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/994004c1-0e66-4998-a952-52e41b4637f9-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:47:33.311633 master-0 kubenswrapper[32968]: I0309 16:47:33.311537 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bd7656797-fzjhw" podStartSLOduration=28.311509754 podStartE2EDuration="28.311509754s" podCreationTimestamp="2026-03-09 16:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:47:33.310459937 +0000 UTC m=+79.413782477" watchObservedRunningTime="2026-03-09 16:47:33.311509754 +0000 UTC m=+79.414832294" Mar 09 16:47:33.337970 master-0 kubenswrapper[32968]: I0309 16:47:33.337914 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" Mar 09 16:47:33.801665 master-0 kubenswrapper[32968]: I0309 16:47:33.801544 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 09 16:47:33.895489 master-0 kubenswrapper[32968]: I0309 16:47:33.895379 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df","Type":"ContainerStarted","Data":"8d054c705e740247388e519eb180b5a68338ba2f8ff895452346390a63e235d5"} Mar 09 16:47:34.094627 master-0 kubenswrapper[32968]: I0309 16:47:34.094530 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8677cbd3-649f-41cd-8b8a-eadca971906b" path="/var/lib/kubelet/pods/8677cbd3-649f-41cd-8b8a-eadca971906b/volumes" Mar 09 16:47:34.858314 master-0 kubenswrapper[32968]: I0309 16:47:34.852788 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w"] Mar 09 16:47:34.939310 master-0 kubenswrapper[32968]: I0309 
16:47:34.938532 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df","Type":"ContainerStarted","Data":"6d443b8c7701a1e5519d5ef466702beaf453aec2a3b563de331d4699fd727652"} Mar 09 16:47:34.956562 master-0 kubenswrapper[32968]: I0309 16:47:34.951532 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-869ff9c57d-w6zhf"] Mar 09 16:47:34.956562 master-0 kubenswrapper[32968]: I0309 16:47:34.952691 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-869ff9c57d-w6zhf"] Mar 09 16:47:35.039546 master-0 kubenswrapper[32968]: I0309 16:47:35.035581 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6797f84d95-xvlzk" podStartSLOduration=28.035563337 podStartE2EDuration="28.035563337s" podCreationTimestamp="2026-03-09 16:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:47:35.034103449 +0000 UTC m=+81.137425999" watchObservedRunningTime="2026-03-09 16:47:35.035563337 +0000 UTC m=+81.138885877" Mar 09 16:47:35.069614 master-0 kubenswrapper[32968]: I0309 16:47:35.064552 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5777869874-vlrx4" podStartSLOduration=28.064527628 podStartE2EDuration="28.064527628s" podCreationTimestamp="2026-03-09 16:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:47:35.060368697 +0000 UTC m=+81.163691237" watchObservedRunningTime="2026-03-09 16:47:35.064527628 +0000 UTC m=+81.167850168" Mar 09 16:47:35.152581 master-0 kubenswrapper[32968]: I0309 16:47:35.144164 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 09 16:47:35.165409 master-0 kubenswrapper[32968]: I0309 16:47:35.165151 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 09 16:47:35.190144 master-0 kubenswrapper[32968]: I0309 16:47:35.190032 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=7.190006975 podStartE2EDuration="7.190006975s" podCreationTimestamp="2026-03-09 16:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:47:35.179208098 +0000 UTC m=+81.282530638" watchObservedRunningTime="2026-03-09 16:47:35.190006975 +0000 UTC m=+81.293329515" Mar 09 16:47:35.503213 master-0 kubenswrapper[32968]: I0309 16:47:35.503139 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:35.503213 master-0 kubenswrapper[32968]: I0309 16:47:35.503203 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:47:35.626734 master-0 kubenswrapper[32968]: I0309 16:47:35.626663 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 09 16:47:35.626972 master-0 kubenswrapper[32968]: I0309 16:47:35.626759 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 09 16:47:36.095012 master-0 kubenswrapper[32968]: 
I0309 16:47:36.094941 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994004c1-0e66-4998-a952-52e41b4637f9" path="/var/lib/kubelet/pods/994004c1-0e66-4998-a952-52e41b4637f9/volumes" Mar 09 16:47:36.095587 master-0 kubenswrapper[32968]: I0309 16:47:36.095557 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c6cb24-23a5-42b4-b76a-69455d914ea4" path="/var/lib/kubelet/pods/a8c6cb24-23a5-42b4-b76a-69455d914ea4/volumes" Mar 09 16:47:39.421222 master-0 kubenswrapper[32968]: I0309 16:47:39.421130 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-gh9dp" Mar 09 16:47:42.074390 master-0 kubenswrapper[32968]: I0309 16:47:42.074296 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 09 16:47:42.074390 master-0 kubenswrapper[32968]: I0309 16:47:42.074373 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 09 16:47:45.503544 master-0 kubenswrapper[32968]: I0309 16:47:45.503451 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 09 16:47:45.504483 master-0 kubenswrapper[32968]: I0309 16:47:45.503568 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" 
probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 09 16:47:52.074438 master-0 kubenswrapper[32968]: I0309 16:47:52.074335 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 09 16:47:52.075195 master-0 kubenswrapper[32968]: I0309 16:47:52.074457 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 09 16:47:55.504038 master-0 kubenswrapper[32968]: I0309 16:47:55.503957 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 09 16:47:55.504783 master-0 kubenswrapper[32968]: I0309 16:47:55.504083 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 09 16:47:59.947865 master-0 kubenswrapper[32968]: I0309 16:47:59.947748 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" podUID="33440c62-309f-4496-a32e-d0e9ecc5aac3" containerName="oauth-openshift" containerID="cri-o://6cb17b7f506ba153ac5078fd6f00a5489b0e9384f588c27bf1eb504aacc1079b" gracePeriod=15 Mar 09 16:48:00.127790 master-0 
kubenswrapper[32968]: I0309 16:48:00.127717 32968 generic.go:334] "Generic (PLEG): container finished" podID="33440c62-309f-4496-a32e-d0e9ecc5aac3" containerID="6cb17b7f506ba153ac5078fd6f00a5489b0e9384f588c27bf1eb504aacc1079b" exitCode=0 Mar 09 16:48:00.127916 master-0 kubenswrapper[32968]: I0309 16:48:00.127813 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" event={"ID":"33440c62-309f-4496-a32e-d0e9ecc5aac3","Type":"ContainerDied","Data":"6cb17b7f506ba153ac5078fd6f00a5489b0e9384f588c27bf1eb504aacc1079b"} Mar 09 16:48:00.490048 master-0 kubenswrapper[32968]: I0309 16:48:00.489904 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:48:00.610335 master-0 kubenswrapper[32968]: I0309 16:48:00.610251 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-664c7f89f8-pnp4s"] Mar 09 16:48:00.610629 master-0 kubenswrapper[32968]: E0309 16:48:00.610588 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994004c1-0e66-4998-a952-52e41b4637f9" containerName="console" Mar 09 16:48:00.610629 master-0 kubenswrapper[32968]: I0309 16:48:00.610602 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="994004c1-0e66-4998-a952-52e41b4637f9" containerName="console" Mar 09 16:48:00.610629 master-0 kubenswrapper[32968]: E0309 16:48:00.610622 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33440c62-309f-4496-a32e-d0e9ecc5aac3" containerName="oauth-openshift" Mar 09 16:48:00.610629 master-0 kubenswrapper[32968]: I0309 16:48:00.610628 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="33440c62-309f-4496-a32e-d0e9ecc5aac3" containerName="oauth-openshift" Mar 09 16:48:00.616613 master-0 kubenswrapper[32968]: I0309 16:48:00.616565 32968 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="33440c62-309f-4496-a32e-d0e9ecc5aac3" containerName="oauth-openshift" Mar 09 16:48:00.616886 master-0 kubenswrapper[32968]: I0309 16:48:00.616867 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="994004c1-0e66-4998-a952-52e41b4637f9" containerName="console" Mar 09 16:48:00.617603 master-0 kubenswrapper[32968]: I0309 16:48:00.617583 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.634752 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-error\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.634806 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-dir\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.634909 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-router-certs\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.634926 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-serving-cert\") pod 
\"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.634951 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-cliconfig\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.634994 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-provider-selection\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.635043 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-policies\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.635155 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vwgs\" (UniqueName: \"kubernetes.io/projected/33440c62-309f-4496-a32e-d0e9ecc5aac3-kube-api-access-4vwgs\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.635178 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-session\") pod 
\"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635214 master-0 kubenswrapper[32968]: I0309 16:48:00.635202 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-trusted-ca-bundle\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635729 master-0 kubenswrapper[32968]: I0309 16:48:00.635243 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-ocp-branding-template\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635729 master-0 kubenswrapper[32968]: I0309 16:48:00.635280 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-login\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.635729 master-0 kubenswrapper[32968]: I0309 16:48:00.635310 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-service-ca\") pod \"33440c62-309f-4496-a32e-d0e9ecc5aac3\" (UID: \"33440c62-309f-4496-a32e-d0e9ecc5aac3\") " Mar 09 16:48:00.636380 master-0 kubenswrapper[32968]: I0309 16:48:00.636341 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-service-ca" (OuterVolumeSpecName: 
"v4-0-config-system-service-ca") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:48:00.637018 master-0 kubenswrapper[32968]: I0309 16:48:00.636932 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:48:00.637648 master-0 kubenswrapper[32968]: I0309 16:48:00.637619 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:48:00.639939 master-0 kubenswrapper[32968]: I0309 16:48:00.637982 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:48:00.639939 master-0 kubenswrapper[32968]: I0309 16:48:00.639472 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:48:00.641738 master-0 kubenswrapper[32968]: I0309 16:48:00.641677 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.641738 master-0 kubenswrapper[32968]: I0309 16:48:00.641702 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.642384 master-0 kubenswrapper[32968]: I0309 16:48:00.642336 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.642912 master-0 kubenswrapper[32968]: I0309 16:48:00.642879 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.643490 master-0 kubenswrapper[32968]: I0309 16:48:00.643398 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.643658 master-0 kubenswrapper[32968]: I0309 16:48:00.643614 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.644900 master-0 kubenswrapper[32968]: I0309 16:48:00.644090 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:48:00.648723 master-0 kubenswrapper[32968]: I0309 16:48:00.645756 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33440c62-309f-4496-a32e-d0e9ecc5aac3-kube-api-access-4vwgs" (OuterVolumeSpecName: "kube-api-access-4vwgs") pod "33440c62-309f-4496-a32e-d0e9ecc5aac3" (UID: "33440c62-309f-4496-a32e-d0e9ecc5aac3"). InnerVolumeSpecName "kube-api-access-4vwgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:48:00.654732 master-0 kubenswrapper[32968]: I0309 16:48:00.653108 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-664c7f89f8-pnp4s"] Mar 09 16:48:00.737150 master-0 kubenswrapper[32968]: I0309 16:48:00.737055 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.737398 master-0 kubenswrapper[32968]: I0309 16:48:00.737262 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-login\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.737534 master-0 kubenswrapper[32968]: I0309 16:48:00.737414 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.737639 master-0 kubenswrapper[32968]: I0309 16:48:00.737606 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-audit-policies\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.737705 master-0 kubenswrapper[32968]: I0309 16:48:00.737679 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-audit-dir\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.737859 master-0 kubenswrapper[32968]: I0309 16:48:00.737823 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-error\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: 
\"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.737959 master-0 kubenswrapper[32968]: I0309 16:48:00.737903 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738037 master-0 kubenswrapper[32968]: I0309 16:48:00.738012 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738106 master-0 kubenswrapper[32968]: I0309 16:48:00.738080 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrlg2\" (UniqueName: \"kubernetes.io/projected/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-kube-api-access-hrlg2\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738190 master-0 kubenswrapper[32968]: I0309 16:48:00.738161 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " 
pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738247 master-0 kubenswrapper[32968]: I0309 16:48:00.738225 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-session\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738361 master-0 kubenswrapper[32968]: I0309 16:48:00.738332 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-service-ca\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738461 master-0 kubenswrapper[32968]: I0309 16:48:00.738443 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-router-certs\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.738624 master-0 kubenswrapper[32968]: I0309 16:48:00.738598 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738624 master-0 kubenswrapper[32968]: I0309 16:48:00.738622 32968 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738693 master-0 kubenswrapper[32968]: I0309 16:48:00.738640 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vwgs\" (UniqueName: \"kubernetes.io/projected/33440c62-309f-4496-a32e-d0e9ecc5aac3-kube-api-access-4vwgs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738693 master-0 kubenswrapper[32968]: I0309 16:48:00.738654 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738693 master-0 kubenswrapper[32968]: I0309 16:48:00.738665 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738693 master-0 kubenswrapper[32968]: I0309 16:48:00.738676 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738693 master-0 kubenswrapper[32968]: I0309 16:48:00.738686 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738693 master-0 kubenswrapper[32968]: I0309 16:48:00.738696 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-service-ca\") on node 
\"master-0\" DevicePath \"\"" Mar 09 16:48:00.738877 master-0 kubenswrapper[32968]: I0309 16:48:00.738712 32968 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33440c62-309f-4496-a32e-d0e9ecc5aac3-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738877 master-0 kubenswrapper[32968]: I0309 16:48:00.738723 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738877 master-0 kubenswrapper[32968]: I0309 16:48:00.738736 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738877 master-0 kubenswrapper[32968]: I0309 16:48:00.738747 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.738877 master-0 kubenswrapper[32968]: I0309 16:48:00.738757 32968 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33440c62-309f-4496-a32e-d0e9ecc5aac3-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:00.840759 master-0 kubenswrapper[32968]: I0309 16:48:00.840591 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " 
pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.840759 master-0 kubenswrapper[32968]: I0309 16:48:00.840676 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrlg2\" (UniqueName: \"kubernetes.io/projected/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-kube-api-access-hrlg2\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.841728 master-0 kubenswrapper[32968]: I0309 16:48:00.841610 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.841846 master-0 kubenswrapper[32968]: I0309 16:48:00.841800 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-session\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.841921 master-0 kubenswrapper[32968]: I0309 16:48:00.841889 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-service-ca\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.842967 master-0 kubenswrapper[32968]: I0309 16:48:00.842887 32968 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-router-certs\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843082 master-0 kubenswrapper[32968]: I0309 16:48:00.843011 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843082 master-0 kubenswrapper[32968]: I0309 16:48:00.842924 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843237 master-0 kubenswrapper[32968]: I0309 16:48:00.843084 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-login\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843237 master-0 kubenswrapper[32968]: I0309 16:48:00.843152 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843237 master-0 kubenswrapper[32968]: I0309 16:48:00.843227 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-audit-policies\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843578 master-0 kubenswrapper[32968]: I0309 16:48:00.843272 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-audit-dir\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843578 master-0 kubenswrapper[32968]: I0309 16:48:00.843384 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-error\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843578 master-0 kubenswrapper[32968]: I0309 16:48:00.843463 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " 
pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843578 master-0 kubenswrapper[32968]: I0309 16:48:00.843480 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-service-ca\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.843833 master-0 kubenswrapper[32968]: I0309 16:48:00.843652 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-audit-dir\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.844740 master-0 kubenswrapper[32968]: I0309 16:48:00.844673 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-audit-policies\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.845842 master-0 kubenswrapper[32968]: I0309 16:48:00.845342 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.847213 master-0 kubenswrapper[32968]: I0309 16:48:00.847159 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-session\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.847335 master-0 kubenswrapper[32968]: I0309 16:48:00.847251 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-error\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.847480 master-0 kubenswrapper[32968]: I0309 16:48:00.847410 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.847621 master-0 kubenswrapper[32968]: I0309 16:48:00.847570 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-router-certs\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.847805 master-0 kubenswrapper[32968]: I0309 16:48:00.847755 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " 
pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.848912 master-0 kubenswrapper[32968]: I0309 16:48:00.848871 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-login\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.849543 master-0 kubenswrapper[32968]: I0309 16:48:00.849480 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.862967 master-0 kubenswrapper[32968]: I0309 16:48:00.862909 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrlg2\" (UniqueName: \"kubernetes.io/projected/ed31cccb-d67d-4e66-ba14-d658ae8b4b4d-kube-api-access-hrlg2\") pod \"oauth-openshift-664c7f89f8-pnp4s\" (UID: \"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d\") " pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:00.996469 master-0 kubenswrapper[32968]: I0309 16:48:00.996380 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" Mar 09 16:48:01.138521 master-0 kubenswrapper[32968]: I0309 16:48:01.138461 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" event={"ID":"33440c62-309f-4496-a32e-d0e9ecc5aac3","Type":"ContainerDied","Data":"b2536eecfc04abaef979d74f1fba0188ae881bc894ed0d4be8aebaaecc3e1620"} Mar 09 16:48:01.138521 master-0 kubenswrapper[32968]: I0309 16:48:01.138502 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w" Mar 09 16:48:01.138823 master-0 kubenswrapper[32968]: I0309 16:48:01.138546 32968 scope.go:117] "RemoveContainer" containerID="6cb17b7f506ba153ac5078fd6f00a5489b0e9384f588c27bf1eb504aacc1079b" Mar 09 16:48:01.271492 master-0 kubenswrapper[32968]: I0309 16:48:01.270292 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w"] Mar 09 16:48:01.315315 master-0 kubenswrapper[32968]: I0309 16:48:01.315233 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-58c5d7cc86-f9w2w"] Mar 09 16:48:01.632874 master-0 kubenswrapper[32968]: I0309 16:48:01.632817 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-664c7f89f8-pnp4s"] Mar 09 16:48:01.641167 master-0 kubenswrapper[32968]: W0309 16:48:01.641109 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded31cccb_d67d_4e66_ba14_d658ae8b4b4d.slice/crio-332809b096d495e9e8b2f29c81066f9b5a9ef182ac091349af8f26a27dc6b3ca WatchSource:0}: Error finding container 332809b096d495e9e8b2f29c81066f9b5a9ef182ac091349af8f26a27dc6b3ca: Status 404 returned error can't find the container with id 332809b096d495e9e8b2f29c81066f9b5a9ef182ac091349af8f26a27dc6b3ca Mar 
09 16:48:02.074395 master-0 kubenswrapper[32968]: I0309 16:48:02.074325 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 09 16:48:02.075047 master-0 kubenswrapper[32968]: I0309 16:48:02.074396 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 09 16:48:02.095784 master-0 kubenswrapper[32968]: I0309 16:48:02.095697 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33440c62-309f-4496-a32e-d0e9ecc5aac3" path="/var/lib/kubelet/pods/33440c62-309f-4496-a32e-d0e9ecc5aac3/volumes"
Mar 09 16:48:02.157069 master-0 kubenswrapper[32968]: I0309 16:48:02.156859 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" event={"ID":"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d","Type":"ContainerStarted","Data":"28a032e88fc634533d52a9da36200fe8ed24c3737b20505bbb3335bcf01c0cd6"}
Mar 09 16:48:02.157069 master-0 kubenswrapper[32968]: I0309 16:48:02.156947 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" event={"ID":"ed31cccb-d67d-4e66-ba14-d658ae8b4b4d","Type":"ContainerStarted","Data":"332809b096d495e9e8b2f29c81066f9b5a9ef182ac091349af8f26a27dc6b3ca"}
Mar 09 16:48:02.157907 master-0 kubenswrapper[32968]: I0309 16:48:02.157848 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s"
Mar 09 16:48:02.292106 master-0 kubenswrapper[32968]: I0309 16:48:02.291998 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s"
Mar 09 16:48:02.940877 master-0 kubenswrapper[32968]: I0309 16:48:02.940792 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-664c7f89f8-pnp4s" podStartSLOduration=28.940759884 podStartE2EDuration="28.940759884s" podCreationTimestamp="2026-03-09 16:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:48:02.22117664 +0000 UTC m=+108.324499200" watchObservedRunningTime="2026-03-09 16:48:02.940759884 +0000 UTC m=+109.044082424"
Mar 09 16:48:05.504378 master-0 kubenswrapper[32968]: I0309 16:48:05.504306 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 09 16:48:05.505408 master-0 kubenswrapper[32968]: I0309 16:48:05.504398 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 09 16:48:12.075040 master-0 kubenswrapper[32968]: I0309 16:48:12.074314 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 09 16:48:12.075912 master-0 kubenswrapper[32968]: I0309 16:48:12.075197 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 09 16:48:12.735084 master-0 kubenswrapper[32968]: I0309 16:48:12.734994 32968 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 09 16:48:12.736112 master-0 kubenswrapper[32968]: I0309 16:48:12.736077 32968 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 09 16:48:12.736735 master-0 kubenswrapper[32968]: I0309 16:48:12.736703 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.737084 master-0 kubenswrapper[32968]: I0309 16:48:12.736992 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9" gracePeriod=15
Mar 09 16:48:12.737237 master-0 kubenswrapper[32968]: I0309 16:48:12.737093 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137" gracePeriod=15
Mar 09 16:48:12.737364 master-0 kubenswrapper[32968]: I0309 16:48:12.737011 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4" gracePeriod=15
Mar 09 16:48:12.737364 master-0 kubenswrapper[32968]: I0309 16:48:12.737331 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver" containerID="cri-o://87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49" gracePeriod=15
Mar 09 16:48:12.737466 master-0 kubenswrapper[32968]: I0309 16:48:12.737121 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer" containerID="cri-o://faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58" gracePeriod=15
Mar 09 16:48:12.737466 master-0 kubenswrapper[32968]: I0309 16:48:12.737024 32968 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: E0309 16:48:12.739031 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739073 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: E0309 16:48:12.739135 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739147 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: E0309 16:48:12.739160 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739171 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: E0309 16:48:12.739197 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="setup"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739204 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="setup"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: E0309 16:48:12.739224 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739233 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: E0309 16:48:12.739251 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739259 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739437 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-regeneration-controller"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739451 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739476 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-insecure-readyz"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739489 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="setup"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739520 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-cert-syncer"
Mar 09 16:48:12.739678 master-0 kubenswrapper[32968]: I0309 16:48:12.739529 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="48512e02022680c9d90092634f0fc146" containerName="kube-apiserver-check-endpoints"
Mar 09 16:48:12.857845 master-0 kubenswrapper[32968]: E0309 16:48:12.857166 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.862451 master-0 kubenswrapper[32968]: I0309 16:48:12.862295 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.864189 master-0 kubenswrapper[32968]: I0309 16:48:12.863708 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.864189 master-0 kubenswrapper[32968]: I0309 16:48:12.863866 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.864189 master-0 kubenswrapper[32968]: I0309 16:48:12.863903 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.864189 master-0 kubenswrapper[32968]: I0309 16:48:12.863927 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.864189 master-0 kubenswrapper[32968]: I0309 16:48:12.863949 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.864189 master-0 kubenswrapper[32968]: I0309 16:48:12.863984 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.864615 master-0 kubenswrapper[32968]: I0309 16:48:12.864297 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.966575 master-0 kubenswrapper[32968]: I0309 16:48:12.966490 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.966966 master-0 kubenswrapper[32968]: I0309 16:48:12.966643 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.966966 master-0 kubenswrapper[32968]: I0309 16:48:12.966711 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.966966 master-0 kubenswrapper[32968]: I0309 16:48:12.966807 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.967101 master-0 kubenswrapper[32968]: I0309 16:48:12.966979 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.967101 master-0 kubenswrapper[32968]: I0309 16:48:12.967046 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:12.967162 master-0 kubenswrapper[32968]: I0309 16:48:12.967104 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967206 master-0 kubenswrapper[32968]: I0309 16:48:12.967187 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967328 master-0 kubenswrapper[32968]: I0309 16:48:12.967306 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967366 master-0 kubenswrapper[32968]: I0309 16:48:12.967263 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967412 master-0 kubenswrapper[32968]: I0309 16:48:12.967391 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967540 master-0 kubenswrapper[32968]: I0309 16:48:12.967467 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967639 master-0 kubenswrapper[32968]: I0309 16:48:12.967617 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967706 master-0 kubenswrapper[32968]: I0309 16:48:12.967687 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967743 master-0 kubenswrapper[32968]: I0309 16:48:12.967730 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:12.967830 master-0 kubenswrapper[32968]: I0309 16:48:12.967808 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:13.158026 master-0 kubenswrapper[32968]: I0309 16:48:13.157803 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:13.186887 master-0 kubenswrapper[32968]: W0309 16:48:13.186814 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb275ed7e9ce09d69a66613ca3ae3d89e.slice/crio-638f5b55e64f9ed8f2e19f70e3b3f364f9b588e45276527c5fb2608d489874cb WatchSource:0}: Error finding container 638f5b55e64f9ed8f2e19f70e3b3f364f9b588e45276527c5fb2608d489874cb: Status 404 returned error can't find the container with id 638f5b55e64f9ed8f2e19f70e3b3f364f9b588e45276527c5fb2608d489874cb
Mar 09 16:48:13.260603 master-0 kubenswrapper[32968]: I0309 16:48:13.260525 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"638f5b55e64f9ed8f2e19f70e3b3f364f9b588e45276527c5fb2608d489874cb"}
Mar 09 16:48:13.266242 master-0 kubenswrapper[32968]: I0309 16:48:13.266195 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log"
Mar 09 16:48:13.267355 master-0 kubenswrapper[32968]: I0309 16:48:13.267186 32968 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9" exitCode=0
Mar 09 16:48:13.267355 master-0 kubenswrapper[32968]: I0309 16:48:13.267350 32968 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137" exitCode=0
Mar 09 16:48:13.267355 master-0 kubenswrapper[32968]: I0309 16:48:13.267361 32968 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4" exitCode=0
Mar 09 16:48:13.267593 master-0 kubenswrapper[32968]: I0309 16:48:13.267370 32968 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58" exitCode=2
Mar 09 16:48:13.269059 master-0 kubenswrapper[32968]: I0309 16:48:13.268999 32968 generic.go:334] "Generic (PLEG): container finished" podID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" containerID="6d443b8c7701a1e5519d5ef466702beaf453aec2a3b563de331d4699fd727652" exitCode=0
Mar 09 16:48:13.269059 master-0 kubenswrapper[32968]: I0309 16:48:13.269055 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df","Type":"ContainerDied","Data":"6d443b8c7701a1e5519d5ef466702beaf453aec2a3b563de331d4699fd727652"}
Mar 09 16:48:13.271048 master-0 kubenswrapper[32968]: I0309 16:48:13.270982 32968 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:13.271833 master-0 kubenswrapper[32968]: I0309 16:48:13.271776 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.013070 master-0 kubenswrapper[32968]: E0309 16:48:14.012961 32968 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.013537 master-0 kubenswrapper[32968]: E0309 16:48:14.013505 32968 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.014002 master-0 kubenswrapper[32968]: E0309 16:48:14.013967 32968 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.014466 master-0 kubenswrapper[32968]: E0309 16:48:14.014407 32968 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.014957 master-0 kubenswrapper[32968]: E0309 16:48:14.014920 32968 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.014957 master-0 kubenswrapper[32968]: I0309 16:48:14.014953 32968 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 09 16:48:14.015408 master-0 kubenswrapper[32968]: E0309 16:48:14.015371 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 09 16:48:14.090999 master-0 kubenswrapper[32968]: I0309 16:48:14.090887 32968 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.091801 master-0 kubenswrapper[32968]: I0309 16:48:14.091713 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.217602 master-0 kubenswrapper[32968]: E0309 16:48:14.217518 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 09 16:48:14.280668 master-0 kubenswrapper[32968]: I0309 16:48:14.280452 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"8fab2020ef9b38432e3f16fd30963c59fa955a3ba62df68c3c2ea954609a4fb6"}
Mar 09 16:48:14.281535 master-0 kubenswrapper[32968]: E0309 16:48:14.281047 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:14.281793 master-0 kubenswrapper[32968]: I0309 16:48:14.281703 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.620349 master-0 kubenswrapper[32968]: E0309 16:48:14.620059 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 09 16:48:14.667324 master-0 kubenswrapper[32968]: I0309 16:48:14.667248 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 09 16:48:14.668874 master-0 kubenswrapper[32968]: I0309 16:48:14.668819 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:14.796019 master-0 kubenswrapper[32968]: I0309 16:48:14.795654 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kubelet-dir\") pod \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") "
Mar 09 16:48:14.796019 master-0 kubenswrapper[32968]: I0309 16:48:14.795800 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kube-api-access\") pod \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") "
Mar 09 16:48:14.796019 master-0 kubenswrapper[32968]: I0309 16:48:14.795849 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-var-lock\") pod \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\" (UID: \"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df\") "
Mar 09 16:48:14.796651 master-0 kubenswrapper[32968]: I0309 16:48:14.796586 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-var-lock" (OuterVolumeSpecName: "var-lock") pod "cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" (UID: "cea05ed1-c8b7-4ed5-ae5a-360bd225c1df"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:48:14.796815 master-0 kubenswrapper[32968]: I0309 16:48:14.796697 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" (UID: "cea05ed1-c8b7-4ed5-ae5a-360bd225c1df"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:48:14.818653 master-0 kubenswrapper[32968]: I0309 16:48:14.808995 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" (UID: "cea05ed1-c8b7-4ed5-ae5a-360bd225c1df"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:48:14.899943 master-0 kubenswrapper[32968]: I0309 16:48:14.899800 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:48:14.900581 master-0 kubenswrapper[32968]: I0309 16:48:14.900561 32968 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:48:14.900716 master-0 kubenswrapper[32968]: I0309 16:48:14.900696 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea05ed1-c8b7-4ed5-ae5a-360bd225c1df-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 09 16:48:15.234024 master-0 kubenswrapper[32968]: I0309 16:48:15.233946 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log"
Mar 09 16:48:15.235230 master-0 kubenswrapper[32968]: I0309 16:48:15.235199 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:15.236703 master-0 kubenswrapper[32968]: I0309 16:48:15.236601 32968 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:15.237344 master-0 kubenswrapper[32968]: I0309 16:48:15.237314 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:15.290024 master-0 kubenswrapper[32968]: I0309 16:48:15.289952 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"cea05ed1-c8b7-4ed5-ae5a-360bd225c1df","Type":"ContainerDied","Data":"8d054c705e740247388e519eb180b5a68338ba2f8ff895452346390a63e235d5"}
Mar 09 16:48:15.290024 master-0 kubenswrapper[32968]: I0309 16:48:15.290020 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d054c705e740247388e519eb180b5a68338ba2f8ff895452346390a63e235d5"
Mar 09 16:48:15.290024 master-0 kubenswrapper[32968]: I0309 16:48:15.289985 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 09 16:48:15.293937 master-0 kubenswrapper[32968]: I0309 16:48:15.293895 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_48512e02022680c9d90092634f0fc146/kube-apiserver-cert-syncer/0.log"
Mar 09 16:48:15.294684 master-0 kubenswrapper[32968]: I0309 16:48:15.294647 32968 generic.go:334] "Generic (PLEG): container finished" podID="48512e02022680c9d90092634f0fc146" containerID="87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49" exitCode=0
Mar 09 16:48:15.295748 master-0 kubenswrapper[32968]: I0309 16:48:15.295615 32968 scope.go:117] "RemoveContainer" containerID="dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9"
Mar 09 16:48:15.295748 master-0 kubenswrapper[32968]: I0309 16:48:15.295710 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:15.296806 master-0 kubenswrapper[32968]: E0309 16:48:15.296176 32968 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:48:15.309934 master-0 kubenswrapper[32968]: I0309 16:48:15.309772 32968 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:15.310537 master-0 kubenswrapper[32968]: I0309 16:48:15.310491 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 09 16:48:15.316002 master-0 kubenswrapper[32968]: I0309 16:48:15.315973 32968 scope.go:117] "RemoveContainer" containerID="903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137"
Mar 09 16:48:15.331074 master-0 kubenswrapper[32968]: I0309 16:48:15.331002 32968 scope.go:117] "RemoveContainer" containerID="4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4"
Mar 09 16:48:15.353488 master-0 kubenswrapper[32968]: I0309 16:48:15.353368 32968 scope.go:117] "RemoveContainer" containerID="faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58"
Mar 09 16:48:15.374076 master-0 kubenswrapper[32968]: I0309 16:48:15.373948 32968 scope.go:117] "RemoveContainer" containerID="87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49"
Mar 09 16:48:15.395310 master-0 kubenswrapper[32968]: I0309 16:48:15.395248 32968 scope.go:117] "RemoveContainer" containerID="00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497"
Mar 09 16:48:15.407838 master-0 kubenswrapper[32968]: I0309 16:48:15.407774 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") "
Mar 09 16:48:15.408073 master-0 kubenswrapper[32968]: I0309 16:48:15.407873 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") "
Mar 09 16:48:15.408073 master-0 kubenswrapper[32968]: I0309 16:48:15.407918 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") pod \"48512e02022680c9d90092634f0fc146\" (UID: \"48512e02022680c9d90092634f0fc146\") "
Mar 09 16:48:15.408410 master-0 kubenswrapper[32968]: I0309 16:48:15.408215 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:48:15.408410 master-0 kubenswrapper[32968]: I0309 16:48:15.408228 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:48:15.408410 master-0 kubenswrapper[32968]: I0309 16:48:15.408311 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48512e02022680c9d90092634f0fc146" (UID: "48512e02022680c9d90092634f0fc146"). InnerVolumeSpecName "audit-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:48:15.417648 master-0 kubenswrapper[32968]: I0309 16:48:15.417455 32968 scope.go:117] "RemoveContainer" containerID="dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9" Mar 09 16:48:15.418670 master-0 kubenswrapper[32968]: E0309 16:48:15.418635 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9\": container with ID starting with dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9 not found: ID does not exist" containerID="dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9" Mar 09 16:48:15.418770 master-0 kubenswrapper[32968]: I0309 16:48:15.418684 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9"} err="failed to get container status \"dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9\": rpc error: code = NotFound desc = could not find container \"dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9\": container with ID starting with dbc8802e9c172eeb6d85c32209958ed71bbd86df94f37df4a00c7b1bb549f3a9 not found: ID does not exist" Mar 09 16:48:15.418770 master-0 kubenswrapper[32968]: I0309 16:48:15.418721 32968 scope.go:117] "RemoveContainer" containerID="903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137" Mar 09 16:48:15.419585 master-0 kubenswrapper[32968]: E0309 16:48:15.419235 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137\": container with ID starting with 903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137 not found: ID does not exist" 
containerID="903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137" Mar 09 16:48:15.419585 master-0 kubenswrapper[32968]: I0309 16:48:15.419495 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137"} err="failed to get container status \"903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137\": rpc error: code = NotFound desc = could not find container \"903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137\": container with ID starting with 903de6528b7f75d324d90dbbc775ec16fdeee134268545c44f29e6b617107137 not found: ID does not exist" Mar 09 16:48:15.419585 master-0 kubenswrapper[32968]: I0309 16:48:15.419524 32968 scope.go:117] "RemoveContainer" containerID="4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4" Mar 09 16:48:15.419837 master-0 kubenswrapper[32968]: E0309 16:48:15.419804 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4\": container with ID starting with 4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4 not found: ID does not exist" containerID="4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4" Mar 09 16:48:15.419921 master-0 kubenswrapper[32968]: I0309 16:48:15.419841 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4"} err="failed to get container status \"4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4\": rpc error: code = NotFound desc = could not find container \"4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4\": container with ID starting with 4cac4f793347051eab66786a4a2aac5174bf1e56afdccba1b322f7f7454967c4 not found: ID does not exist" Mar 09 16:48:15.419921 master-0 
kubenswrapper[32968]: I0309 16:48:15.419862 32968 scope.go:117] "RemoveContainer" containerID="faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58" Mar 09 16:48:15.420187 master-0 kubenswrapper[32968]: E0309 16:48:15.420155 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58\": container with ID starting with faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58 not found: ID does not exist" containerID="faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58" Mar 09 16:48:15.420272 master-0 kubenswrapper[32968]: I0309 16:48:15.420185 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58"} err="failed to get container status \"faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58\": rpc error: code = NotFound desc = could not find container \"faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58\": container with ID starting with faf992dd62a14e7701968421e92328de86caa026e8b85ec5ccd956d9ddc44d58 not found: ID does not exist" Mar 09 16:48:15.420272 master-0 kubenswrapper[32968]: I0309 16:48:15.420203 32968 scope.go:117] "RemoveContainer" containerID="87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49" Mar 09 16:48:15.421120 master-0 kubenswrapper[32968]: E0309 16:48:15.420936 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49\": container with ID starting with 87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49 not found: ID does not exist" containerID="87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49" Mar 09 16:48:15.421120 master-0 kubenswrapper[32968]: I0309 16:48:15.420968 
32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49"} err="failed to get container status \"87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49\": rpc error: code = NotFound desc = could not find container \"87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49\": container with ID starting with 87c6629f01a7b8ea9c288d4bb7f7a06c913dc209fb58582a80e419d94c5d1a49 not found: ID does not exist" Mar 09 16:48:15.421120 master-0 kubenswrapper[32968]: I0309 16:48:15.420987 32968 scope.go:117] "RemoveContainer" containerID="00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497" Mar 09 16:48:15.421447 master-0 kubenswrapper[32968]: E0309 16:48:15.421307 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497\": container with ID starting with 00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497 not found: ID does not exist" containerID="00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497" Mar 09 16:48:15.421447 master-0 kubenswrapper[32968]: I0309 16:48:15.421343 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497"} err="failed to get container status \"00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497\": rpc error: code = NotFound desc = could not find container \"00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497\": container with ID starting with 00e113a35be9e490b0dc8fc40000d7da09cf35f91076294eee323bc59fc7f497 not found: ID does not exist" Mar 09 16:48:15.421841 master-0 kubenswrapper[32968]: E0309 16:48:15.421796 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 09 16:48:15.504325 master-0 kubenswrapper[32968]: I0309 16:48:15.504146 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 09 16:48:15.504325 master-0 kubenswrapper[32968]: I0309 16:48:15.504249 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 09 16:48:15.518464 master-0 kubenswrapper[32968]: I0309 16:48:15.513399 32968 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:15.518464 master-0 kubenswrapper[32968]: I0309 16:48:15.513474 32968 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:15.518464 master-0 kubenswrapper[32968]: I0309 16:48:15.513493 32968 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/48512e02022680c9d90092634f0fc146-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:48:15.611897 master-0 kubenswrapper[32968]: I0309 16:48:15.611824 32968 status_manager.go:851] "Failed to get status for pod" podUID="48512e02022680c9d90092634f0fc146" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:48:15.612498 master-0 kubenswrapper[32968]: I0309 16:48:15.612401 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:48:16.095067 master-0 kubenswrapper[32968]: I0309 16:48:16.094972 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48512e02022680c9d90092634f0fc146" path="/var/lib/kubelet/pods/48512e02022680c9d90092634f0fc146/volumes" Mar 09 16:48:17.023795 master-0 kubenswrapper[32968]: E0309 16:48:17.023699 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 09 16:48:17.778180 master-0 kubenswrapper[32968]: E0309 16:48:17.777881 32968 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189b3a3885c18399 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:48512e02022680c9d90092634f0fc146,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-09 16:48:12.737053593 +0000 UTC m=+118.840376133,LastTimestamp:2026-03-09 16:48:12.737053593 +0000 UTC m=+118.840376133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 09 16:48:20.225460 master-0 kubenswrapper[32968]: E0309 16:48:20.225334 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 09 16:48:22.074601 master-0 kubenswrapper[32968]: I0309 16:48:22.074530 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 09 16:48:22.075133 master-0 kubenswrapper[32968]: I0309 16:48:22.074617 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 09 16:48:24.089046 master-0 kubenswrapper[32968]: I0309 16:48:24.088984 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:48:25.084902 master-0 kubenswrapper[32968]: I0309 16:48:25.084818 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:25.086557 master-0 kubenswrapper[32968]: I0309 16:48:25.086362 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:48:25.107222 master-0 kubenswrapper[32968]: I0309 16:48:25.107150 32968 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8" Mar 09 16:48:25.107222 master-0 kubenswrapper[32968]: I0309 16:48:25.107211 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8" Mar 09 16:48:25.108081 master-0 kubenswrapper[32968]: E0309 16:48:25.108013 32968 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:25.108715 master-0 kubenswrapper[32968]: I0309 16:48:25.108677 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:25.413173 master-0 kubenswrapper[32968]: I0309 16:48:25.413105 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"eb646d2922aff2671db94713825f9ad6d05e5ec578013cae057907c4bd30bb76"} Mar 09 16:48:25.504224 master-0 kubenswrapper[32968]: I0309 16:48:25.504142 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 09 16:48:25.504224 master-0 kubenswrapper[32968]: I0309 16:48:25.504210 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 09 16:48:26.425927 master-0 kubenswrapper[32968]: I0309 16:48:26.425718 32968 generic.go:334] "Generic (PLEG): container finished" podID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" containerID="3224b5e0ec5de837fdad8daefc219eb030b51ead1ec23a33bc2bfc2f0c3f26ee" exitCode=0 Mar 09 16:48:26.425927 master-0 kubenswrapper[32968]: I0309 16:48:26.425793 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerDied","Data":"3224b5e0ec5de837fdad8daefc219eb030b51ead1ec23a33bc2bfc2f0c3f26ee"} Mar 09 16:48:26.426683 master-0 kubenswrapper[32968]: I0309 16:48:26.426267 32968 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8" Mar 09 16:48:26.426683 master-0 
kubenswrapper[32968]: I0309 16:48:26.426329 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8" Mar 09 16:48:26.427398 master-0 kubenswrapper[32968]: I0309 16:48:26.427319 32968 status_manager.go:851] "Failed to get status for pod" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 09 16:48:26.427493 master-0 kubenswrapper[32968]: E0309 16:48:26.427390 32968 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:26.627637 master-0 kubenswrapper[32968]: E0309 16:48:26.627527 32968 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Mar 09 16:48:26.995614 master-0 kubenswrapper[32968]: E0309 16:48:26.990584 32968 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee901e15ed65fb7aa5785ec8ec0563e.slice/crio-f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08.scope\": RecentStats: unable to find data in memory cache]" Mar 09 16:48:27.472694 master-0 kubenswrapper[32968]: I0309 16:48:27.472561 32968 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager/0.log" Mar 09 16:48:27.472694 master-0 kubenswrapper[32968]: I0309 16:48:27.472617 32968 generic.go:334] "Generic (PLEG): container finished" podID="4ee901e15ed65fb7aa5785ec8ec0563e" containerID="f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08" exitCode=1 Mar 09 16:48:27.472694 master-0 kubenswrapper[32968]: I0309 16:48:27.472682 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerDied","Data":"f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08"} Mar 09 16:48:27.480886 master-0 kubenswrapper[32968]: I0309 16:48:27.473253 32968 scope.go:117] "RemoveContainer" containerID="f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08" Mar 09 16:48:27.480886 master-0 kubenswrapper[32968]: I0309 16:48:27.477193 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"271365eaf72d31bc93a9e188ed995d95ab323d5342702d02769352ffc89508f7"} Mar 09 16:48:27.480886 master-0 kubenswrapper[32968]: I0309 16:48:27.477218 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"b59cdef9ece87934835b0d9e0af4b2b9081eb08e7a556fda8651586d551a2546"} Mar 09 16:48:27.480886 master-0 kubenswrapper[32968]: I0309 16:48:27.477227 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"bdd34c82e3b3b95e19e3c512c68c883f06f35577f857ead19d223673b979b4c9"} Mar 09 16:48:28.511082 master-0 
kubenswrapper[32968]: I0309 16:48:28.511021 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager/0.log" Mar 09 16:48:28.511750 master-0 kubenswrapper[32968]: I0309 16:48:28.511163 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"4ee901e15ed65fb7aa5785ec8ec0563e","Type":"ContainerStarted","Data":"911cba7e1f9cb852c637561f891e3b5a982532d757d88a06ff9aebcbd7c475c2"} Mar 09 16:48:28.537463 master-0 kubenswrapper[32968]: I0309 16:48:28.535150 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"4b6c6eb1f354f0021c96078c38669feb43bb4d8de63d6f61ab675844a3580bec"} Mar 09 16:48:28.537463 master-0 kubenswrapper[32968]: I0309 16:48:28.535215 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"ceedc1b678737e378e3a9ac2786a5330b21e4b383376a6aa5dc78d4e46b9b59a"} Mar 09 16:48:28.537463 master-0 kubenswrapper[32968]: I0309 16:48:28.535623 32968 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8" Mar 09 16:48:28.537463 master-0 kubenswrapper[32968]: I0309 16:48:28.535648 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8" Mar 09 16:48:28.537463 master-0 kubenswrapper[32968]: I0309 16:48:28.535940 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:30.109788 master-0 kubenswrapper[32968]: I0309 16:48:30.109719 32968 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:30.109788 master-0 kubenswrapper[32968]: I0309 16:48:30.109789 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:30.118736 master-0 kubenswrapper[32968]: I0309 16:48:30.116190 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:32.045289 master-0 kubenswrapper[32968]: I0309 16:48:32.045142 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:48:32.074435 master-0 kubenswrapper[32968]: I0309 16:48:32.074346 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 09 16:48:32.074435 master-0 kubenswrapper[32968]: I0309 16:48:32.074417 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 09 16:48:33.734236 master-0 kubenswrapper[32968]: I0309 16:48:33.734152 32968 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 09 16:48:34.129873 master-0 kubenswrapper[32968]: I0309 16:48:34.129604 32968 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="2632b110-5ab1-4211-a147-89739aaa9d97" Mar 09 
16:48:34.580446 master-0 kubenswrapper[32968]: I0309 16:48:34.580336 32968 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8"
Mar 09 16:48:34.580446 master-0 kubenswrapper[32968]: I0309 16:48:34.580400 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8"
Mar 09 16:48:34.585230 master-0 kubenswrapper[32968]: I0309 16:48:34.585142 32968 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="2632b110-5ab1-4211-a147-89739aaa9d97"
Mar 09 16:48:34.586408 master-0 kubenswrapper[32968]: I0309 16:48:34.586373 32968 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://bdd34c82e3b3b95e19e3c512c68c883f06f35577f857ead19d223673b979b4c9"
Mar 09 16:48:34.586408 master-0 kubenswrapper[32968]: I0309 16:48:34.586400 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:34.984517 master-0 kubenswrapper[32968]: I0309 16:48:34.984438 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:48:34.985268 master-0 kubenswrapper[32968]: I0309 16:48:34.984930 32968 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 09 16:48:34.985268 master-0 kubenswrapper[32968]: I0309 16:48:34.985016 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 09 16:48:35.504338 master-0 kubenswrapper[32968]: I0309 16:48:35.504263 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 09 16:48:35.504816 master-0 kubenswrapper[32968]: I0309 16:48:35.504366 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 09 16:48:35.589278 master-0 kubenswrapper[32968]: I0309 16:48:35.589204 32968 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8"
Mar 09 16:48:35.589787 master-0 kubenswrapper[32968]: I0309 16:48:35.589769 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5703d5d7-efc2-4bdc-b786-05db21cf4be8"
Mar 09 16:48:35.593456 master-0 kubenswrapper[32968]: I0309 16:48:35.593355 32968 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="2632b110-5ab1-4211-a147-89739aaa9d97"
Mar 09 16:48:42.074723 master-0 kubenswrapper[32968]: I0309 16:48:42.074612 32968 patch_prober.go:28] interesting pod/console-7b698b4fc8-zx5n6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 09 16:48:42.075611 master-0 kubenswrapper[32968]: I0309 16:48:42.074747 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 09 16:48:44.047791 master-0 kubenswrapper[32968]: I0309 16:48:44.047679 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 09 16:48:44.317082 master-0 kubenswrapper[32968]: I0309 16:48:44.316844 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 09 16:48:44.598971 master-0 kubenswrapper[32968]: I0309 16:48:44.598838 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 09 16:48:44.871020 master-0 kubenswrapper[32968]: I0309 16:48:44.870825 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 09 16:48:44.984892 master-0 kubenswrapper[32968]: I0309 16:48:44.984825 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 09 16:48:44.991036 master-0 kubenswrapper[32968]: I0309 16:48:44.990725 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:48:44.999758 master-0 kubenswrapper[32968]: I0309 16:48:44.999646 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:48:45.038475 master-0 kubenswrapper[32968]: I0309 16:48:45.038373 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 09 16:48:45.266130 master-0 kubenswrapper[32968]: I0309 16:48:45.266067 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 09 16:48:45.339670 master-0 kubenswrapper[32968]: I0309 16:48:45.339596 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 09 16:48:45.344828 master-0 kubenswrapper[32968]: I0309 16:48:45.344773 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 09 16:48:45.367182 master-0 kubenswrapper[32968]: I0309 16:48:45.367076 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 09 16:48:45.443940 master-0 kubenswrapper[32968]: I0309 16:48:45.443861 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 09 16:48:45.464243 master-0 kubenswrapper[32968]: I0309 16:48:45.464179 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 09 16:48:45.496635 master-0 kubenswrapper[32968]: I0309 16:48:45.496576 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 09 16:48:45.503805 master-0 kubenswrapper[32968]: I0309 16:48:45.503723 32968 patch_prober.go:28] interesting pod/console-7bd7656797-fzjhw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 09 16:48:45.504119 master-0 kubenswrapper[32968]: I0309 16:48:45.503817 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 09 16:48:45.506312 master-0 kubenswrapper[32968]: I0309 16:48:45.506269 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 09 16:48:45.514961 master-0 kubenswrapper[32968]: I0309 16:48:45.514861 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 09 16:48:45.527562 master-0 kubenswrapper[32968]: I0309 16:48:45.527352 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 09 16:48:45.567745 master-0 kubenswrapper[32968]: I0309 16:48:45.567682 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 09 16:48:45.708263 master-0 kubenswrapper[32968]: I0309 16:48:45.708176 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-ns927"
Mar 09 16:48:45.757208 master-0 kubenswrapper[32968]: I0309 16:48:45.757118 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-jtzms"
Mar 09 16:48:45.857508 master-0 kubenswrapper[32968]: I0309 16:48:45.857315 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-hgcd7"
Mar 09 16:48:45.925644 master-0 kubenswrapper[32968]: I0309 16:48:45.925542 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 09 16:48:46.007371 master-0 kubenswrapper[32968]: I0309 16:48:46.007272 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-nzqc5"
Mar 09 16:48:46.055616 master-0 kubenswrapper[32968]: I0309 16:48:46.055531 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x9sm5"
Mar 09 16:48:46.094735 master-0 kubenswrapper[32968]: I0309 16:48:46.094637 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 09 16:48:46.138334 master-0 kubenswrapper[32968]: I0309 16:48:46.138158 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 09 16:48:46.145255 master-0 kubenswrapper[32968]: I0309 16:48:46.145206 32968 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 09 16:48:46.201357 master-0 kubenswrapper[32968]: I0309 16:48:46.201290 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 09 16:48:46.297456 master-0 kubenswrapper[32968]: I0309 16:48:46.297351 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 09 16:48:46.323496 master-0 kubenswrapper[32968]: I0309 16:48:46.323010 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 09 16:48:46.323496 master-0 kubenswrapper[32968]: I0309 16:48:46.323089 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 09 16:48:46.371722 master-0 kubenswrapper[32968]: I0309 16:48:46.371645 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-dzbl7"
Mar 09 16:48:46.380540 master-0 kubenswrapper[32968]: I0309 16:48:46.380456 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 09 16:48:46.485419 master-0 kubenswrapper[32968]: I0309 16:48:46.485344 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 09 16:48:46.532196 master-0 kubenswrapper[32968]: I0309 16:48:46.532128 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 09 16:48:46.637392 master-0 kubenswrapper[32968]: I0309 16:48:46.637324 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 09 16:48:46.659839 master-0 kubenswrapper[32968]: I0309 16:48:46.659766 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 09 16:48:46.700955 master-0 kubenswrapper[32968]: I0309 16:48:46.700886 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-jtvw2"
Mar 09 16:48:46.713803 master-0 kubenswrapper[32968]: I0309 16:48:46.713745 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 09 16:48:46.720072 master-0 kubenswrapper[32968]: I0309 16:48:46.720034 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-wsmcd"
Mar 09 16:48:46.903622 master-0 kubenswrapper[32968]: I0309 16:48:46.903408 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 09 16:48:46.913096 master-0 kubenswrapper[32968]: I0309 16:48:46.912760 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 09 16:48:46.947160 master-0 kubenswrapper[32968]: I0309 16:48:46.947103 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 09 16:48:47.002989 master-0 kubenswrapper[32968]: I0309 16:48:47.002910 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 09 16:48:47.012807 master-0 kubenswrapper[32968]: I0309 16:48:47.012746 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 09 16:48:47.052165 master-0 kubenswrapper[32968]: I0309 16:48:47.052083 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 09 16:48:47.341762 master-0 kubenswrapper[32968]: I0309 16:48:47.341622 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 09 16:48:47.491118 master-0 kubenswrapper[32968]: I0309 16:48:47.491041 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-vp2pt"
Mar 09 16:48:47.568282 master-0 kubenswrapper[32968]: I0309 16:48:47.568155 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 09 16:48:47.597875 master-0 kubenswrapper[32968]: I0309 16:48:47.597669 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 09 16:48:47.635789 master-0 kubenswrapper[32968]: I0309 16:48:47.635713 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 09 16:48:47.699806 master-0 kubenswrapper[32968]: I0309 16:48:47.699702 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 09 16:48:47.748875 master-0 kubenswrapper[32968]: I0309 16:48:47.748791 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:48:47.775481 master-0 kubenswrapper[32968]: I0309 16:48:47.775395 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 09 16:48:47.832525 master-0 kubenswrapper[32968]: I0309 16:48:47.828760 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 09 16:48:48.014785 master-0 kubenswrapper[32968]: I0309 16:48:48.014707 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 09 16:48:48.032550 master-0 kubenswrapper[32968]: I0309 16:48:48.032479 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 09 16:48:48.033063 master-0 kubenswrapper[32968]: I0309 16:48:48.033004 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 09 16:48:48.034251 master-0 kubenswrapper[32968]: I0309 16:48:48.034210 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 09 16:48:48.156086 master-0 kubenswrapper[32968]: I0309 16:48:48.155998 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:48:48.164270 master-0 kubenswrapper[32968]: I0309 16:48:48.164219 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 09 16:48:48.292785 master-0 kubenswrapper[32968]: I0309 16:48:48.292616 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 09 16:48:48.300809 master-0 kubenswrapper[32968]: I0309 16:48:48.300751 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 09 16:48:48.337536 master-0 kubenswrapper[32968]: I0309 16:48:48.337406 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-d68b9"
Mar 09 16:48:48.348949 master-0 kubenswrapper[32968]: I0309 16:48:48.348858 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 09 16:48:48.396915 master-0 kubenswrapper[32968]: I0309 16:48:48.396851 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-7zp7c"
Mar 09 16:48:48.413328 master-0 kubenswrapper[32968]: I0309 16:48:48.413246 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 09 16:48:48.535013 master-0 kubenswrapper[32968]: I0309 16:48:48.534959 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 09 16:48:48.536386 master-0 kubenswrapper[32968]: I0309 16:48:48.536344 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 09 16:48:48.558788 master-0 kubenswrapper[32968]: I0309 16:48:48.558290 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 09 16:48:48.586623 master-0 kubenswrapper[32968]: I0309 16:48:48.586548 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 09 16:48:48.614548 master-0 kubenswrapper[32968]: I0309 16:48:48.614452 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 09 16:48:48.615837 master-0 kubenswrapper[32968]: I0309 16:48:48.615787 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 09 16:48:48.645609 master-0 kubenswrapper[32968]: I0309 16:48:48.645529 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 09 16:48:48.732294 master-0 kubenswrapper[32968]: I0309 16:48:48.732218 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 09 16:48:48.745297 master-0 kubenswrapper[32968]: I0309 16:48:48.745218 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 09 16:48:48.790758 master-0 kubenswrapper[32968]: I0309 16:48:48.790694 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 09 16:48:48.796775 master-0 kubenswrapper[32968]: I0309 16:48:48.796703 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-fhhfg"
Mar 09 16:48:48.836015 master-0 kubenswrapper[32968]: I0309 16:48:48.835852 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 09 16:48:48.907701 master-0 kubenswrapper[32968]: I0309 16:48:48.907630 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-5rw6v"
Mar 09 16:48:48.925656 master-0 kubenswrapper[32968]: I0309 16:48:48.925593 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 09 16:48:48.925999 master-0 kubenswrapper[32968]: I0309 16:48:48.925683 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 09 16:48:48.964758 master-0 kubenswrapper[32968]: I0309 16:48:48.964658 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 09 16:48:48.983175 master-0 kubenswrapper[32968]: I0309 16:48:48.983106 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 09 16:48:48.989957 master-0 kubenswrapper[32968]: I0309 16:48:48.989880 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 09 16:48:49.052772 master-0 kubenswrapper[32968]: I0309 16:48:49.052695 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 09 16:48:49.070728 master-0 kubenswrapper[32968]: I0309 16:48:49.070654 32968 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 09 16:48:49.075563 master-0 kubenswrapper[32968]: I0309 16:48:49.075535 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-8gpxb"
Mar 09 16:48:49.106359 master-0 kubenswrapper[32968]: I0309 16:48:49.106163 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 09 16:48:49.129925 master-0 kubenswrapper[32968]: I0309 16:48:49.129847 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 09 16:48:49.147557 master-0 kubenswrapper[32968]: I0309 16:48:49.147468 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 09 16:48:49.260497 master-0 kubenswrapper[32968]: I0309 16:48:49.260374 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 09 16:48:49.273826 master-0 kubenswrapper[32968]: I0309 16:48:49.273753 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 09 16:48:49.311411 master-0 kubenswrapper[32968]: I0309 16:48:49.311341 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 09 16:48:49.313488 master-0 kubenswrapper[32968]: I0309 16:48:49.313439 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 09 16:48:49.461439 master-0 kubenswrapper[32968]: I0309 16:48:49.461332 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 09 16:48:49.506723 master-0 kubenswrapper[32968]: I0309 16:48:49.506680 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 09 16:48:49.568470 master-0 kubenswrapper[32968]: I0309 16:48:49.568384 32968 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 09 16:48:49.593023 master-0 kubenswrapper[32968]: I0309 16:48:49.592950 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 09 16:48:49.624343 master-0 kubenswrapper[32968]: I0309 16:48:49.624283 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 09 16:48:49.625386 master-0 kubenswrapper[32968]: I0309 16:48:49.625368 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 09 16:48:49.638852 master-0 kubenswrapper[32968]: I0309 16:48:49.638756 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 09 16:48:49.657865 master-0 kubenswrapper[32968]: I0309 16:48:49.657811 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:48:49.822211 master-0 kubenswrapper[32968]: I0309 16:48:49.822044 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 09 16:48:49.855201 master-0 kubenswrapper[32968]: I0309 16:48:49.855150 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 09 16:48:50.005681 master-0 kubenswrapper[32968]: I0309 16:48:50.005621 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-g4frj"
Mar 09 16:48:50.008933 master-0 kubenswrapper[32968]: I0309 16:48:50.008882 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 09 16:48:50.059310 master-0 kubenswrapper[32968]: I0309 16:48:50.059258 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 09 16:48:50.073149 master-0 kubenswrapper[32968]: I0309 16:48:50.072951 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 09 16:48:50.147454 master-0 kubenswrapper[32968]: I0309 16:48:50.147356 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 09 16:48:50.185326 master-0 kubenswrapper[32968]: I0309 16:48:50.184995 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 09 16:48:50.285988 master-0 kubenswrapper[32968]: I0309 16:48:50.285935 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 09 16:48:50.417549 master-0 kubenswrapper[32968]: I0309 16:48:50.417313 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 09 16:48:50.444735 master-0 kubenswrapper[32968]: I0309 16:48:50.444674 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 09 16:48:50.471878 master-0 kubenswrapper[32968]: I0309 16:48:50.471791 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 09 16:48:50.501887 master-0 kubenswrapper[32968]: I0309 16:48:50.501794 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 09 16:48:50.518120 master-0 kubenswrapper[32968]: I0309 16:48:50.518013 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 09 16:48:50.541783 master-0 kubenswrapper[32968]: I0309 16:48:50.541706 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 09 16:48:50.567345 master-0 kubenswrapper[32968]: I0309 16:48:50.567231 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 09 16:48:50.590084 master-0 kubenswrapper[32968]: I0309 16:48:50.589976 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 09 16:48:50.654077 master-0 kubenswrapper[32968]: I0309 16:48:50.653984 32968 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 09 16:48:50.656845 master-0 kubenswrapper[32968]: I0309 16:48:50.656677 32968 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 09 16:48:50.659305 master-0 kubenswrapper[32968]: I0309 16:48:50.659255 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 09 16:48:50.667206 master-0 kubenswrapper[32968]: I0309 16:48:50.667034 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 09 16:48:50.667206 master-0 kubenswrapper[32968]: I0309 16:48:50.667120 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 09 16:48:50.677523 master-0 kubenswrapper[32968]: I0309 16:48:50.677331 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 09 16:48:50.719877 master-0 kubenswrapper[32968]: I0309 16:48:50.719580 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=17.719549487 podStartE2EDuration="17.719549487s" podCreationTimestamp="2026-03-09 16:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:48:50.691328921 +0000 UTC m=+156.794651481" watchObservedRunningTime="2026-03-09 16:48:50.719549487 +0000 UTC m=+156.822872027"
Mar 09 16:48:50.778201 master-0 kubenswrapper[32968]: I0309 16:48:50.778106 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 09 16:48:50.800852 master-0 kubenswrapper[32968]: I0309 16:48:50.800497 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 09 16:48:50.829342 master-0 kubenswrapper[32968]: I0309 16:48:50.829262 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 09 16:48:50.858098 master-0 kubenswrapper[32968]: I0309 16:48:50.858031 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 09 16:48:50.860153 master-0 kubenswrapper[32968]: I0309 16:48:50.860116 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 09 16:48:50.895246 master-0 kubenswrapper[32968]: I0309 16:48:50.895186 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 09 16:48:50.940269 master-0 kubenswrapper[32968]: I0309 16:48:50.940185 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 09 16:48:50.951952 master-0 kubenswrapper[32968]: I0309 16:48:50.951885 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 09 16:48:50.956118 master-0 kubenswrapper[32968]: I0309 16:48:50.956078 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 09 16:48:51.089081 master-0 kubenswrapper[32968]: I0309 16:48:51.088994 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 09 16:48:51.181716 master-0 kubenswrapper[32968]: I0309 16:48:51.181618 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 09 16:48:51.270725 master-0 kubenswrapper[32968]: I0309 16:48:51.270541 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 09 16:48:51.312959 master-0 kubenswrapper[32968]: I0309 16:48:51.312861 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 09 16:48:51.315757 master-0 kubenswrapper[32968]: I0309 16:48:51.315680 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 09 16:48:51.379459 master-0 kubenswrapper[32968]: I0309 16:48:51.379348 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-96tct"
Mar 09 16:48:51.427148 master-0 kubenswrapper[32968]: I0309 16:48:51.427075 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 09 16:48:51.523297 master-0 kubenswrapper[32968]: I0309 16:48:51.523053 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 09 16:48:51.554088 master-0 kubenswrapper[32968]: I0309 16:48:51.553989 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 09 16:48:51.559583 master-0 kubenswrapper[32968]: I0309 16:48:51.559518 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 09 16:48:51.610292 master-0 kubenswrapper[32968]: I0309 16:48:51.610207 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 09 16:48:51.622398 master-0 kubenswrapper[32968]: I0309 16:48:51.622300 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 09 16:48:51.647514 master-0 kubenswrapper[32968]: I0309 16:48:51.647414 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 09 16:48:51.711662 master-0 kubenswrapper[32968]: I0309 16:48:51.711580 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 09 16:48:51.747548 master-0 kubenswrapper[32968]: I0309 16:48:51.747467 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 09 16:48:51.781508 master-0 kubenswrapper[32968]: I0309 16:48:51.781353 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 09 16:48:51.789324 master-0 kubenswrapper[32968]: I0309 16:48:51.789235 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 09 16:48:51.804537 master-0 kubenswrapper[32968]: I0309 16:48:51.804473 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-kz284"
Mar 09 16:48:51.893822 master-0 kubenswrapper[32968]: I0309 16:48:51.893764 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 09 16:48:51.967166 master-0 kubenswrapper[32968]: I0309 16:48:51.967093 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 09 16:48:51.995774 master-0 kubenswrapper[32968]: I0309 16:48:51.995692 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 09 16:48:52.016251 master-0 kubenswrapper[32968]: I0309 16:48:52.016176 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 09 16:48:52.053533 master-0 kubenswrapper[32968]: I0309 16:48:52.053295 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 09 16:48:52.080195 master-0 kubenswrapper[32968]: I0309 16:48:52.080101 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b698b4fc8-zx5n6"
Mar 09 16:48:52.093867 master-0 kubenswrapper[32968]: I0309 16:48:52.093773 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b698b4fc8-zx5n6"
Mar 09 16:48:52.121955 master-0 kubenswrapper[32968]: I0309 16:48:52.121878 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 09 16:48:52.156706 master-0 kubenswrapper[32968]: I0309 16:48:52.156615 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 09 16:48:52.198894 master-0 kubenswrapper[32968]: I0309 16:48:52.198818 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5k05m0jd20f8o"
Mar 09 16:48:52.211449 master-0 kubenswrapper[32968]: I0309 16:48:52.211369 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 09 16:48:52.279720 master-0 kubenswrapper[32968]: I0309 16:48:52.279637 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-gkx8f"
Mar 09 16:48:52.355605 master-0 kubenswrapper[32968]: I0309 16:48:52.355318 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 09 16:48:52.409782 master-0 kubenswrapper[32968]: I0309 16:48:52.409701 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 09 16:48:52.568546 master-0 kubenswrapper[32968]: I0309 16:48:52.568470 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 09 16:48:52.762998 master-0 kubenswrapper[32968]: I0309 16:48:52.762918 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-n686v"
Mar 09 16:48:52.763771 master-0 kubenswrapper[32968]: I0309 16:48:52.763728 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 09 16:48:52.768472 master-0 kubenswrapper[32968]: I0309 16:48:52.768417 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 09 16:48:52.812886 master-0 kubenswrapper[32968]: I0309 16:48:52.810881 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-mwccd"
Mar 09 16:48:52.895546 master-0 kubenswrapper[32968]: I0309 16:48:52.895468 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 09 16:48:52.915718 master-0 kubenswrapper[32968]: I0309 16:48:52.915613 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 09 16:48:52.931978 master-0 kubenswrapper[32968]: I0309 16:48:52.931904 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 09 16:48:52.997210 master-0 kubenswrapper[32968]: I0309 16:48:52.997130 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 09 16:48:53.051589 master-0 kubenswrapper[32968]: I0309 16:48:53.051452 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 09 16:48:53.063800 master-0 kubenswrapper[32968]: I0309 16:48:53.063735 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 09 16:48:53.082615 master-0 kubenswrapper[32968]: I0309 16:48:53.082540 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 09 16:48:53.094470 master-0 kubenswrapper[32968]: I0309 16:48:53.094390 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 09 16:48:53.121945 master-0 kubenswrapper[32968]: I0309 16:48:53.121858 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 09 16:48:53.215272 master-0 kubenswrapper[32968]: I0309 16:48:53.215193 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 09 16:48:53.223245 master-0 kubenswrapper[32968]: I0309 16:48:53.223166 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 09 16:48:53.249045 master-0 kubenswrapper[32968]: I0309 16:48:53.248973 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 09 16:48:53.261335 master-0 kubenswrapper[32968]: I0309 16:48:53.261260 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 09 16:48:53.271032 master-0 kubenswrapper[32968]: I0309 16:48:53.270990 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 09 16:48:53.316522 master-0 kubenswrapper[32968]: I0309 16:48:53.316329 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-rpbqt" Mar 09 16:48:53.333363 master-0 kubenswrapper[32968]: I0309 16:48:53.333280 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-q2k6n" Mar 09 16:48:53.373099 master-0 kubenswrapper[32968]: I0309 16:48:53.372979 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 09 16:48:53.428652 master-0 kubenswrapper[32968]: I0309 16:48:53.428568 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 09 16:48:53.604965 master-0 kubenswrapper[32968]: I0309 16:48:53.604229 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-chm9n" Mar 09 16:48:53.687407 master-0 kubenswrapper[32968]: I0309 16:48:53.687263 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9j6gd" Mar 09 16:48:53.706941 master-0 kubenswrapper[32968]: I0309 16:48:53.706709 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 09 16:48:53.781756 master-0 kubenswrapper[32968]: I0309 16:48:53.781675 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 09 16:48:53.830612 master-0 kubenswrapper[32968]: I0309 16:48:53.830556 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 09 16:48:53.865863 master-0 kubenswrapper[32968]: I0309 16:48:53.865734 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-n45mc" Mar 09 16:48:53.878949 master-0 kubenswrapper[32968]: I0309 16:48:53.878871 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 09 16:48:53.965963 master-0 kubenswrapper[32968]: I0309 16:48:53.965788 
32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-58glv" Mar 09 16:48:54.033789 master-0 kubenswrapper[32968]: I0309 16:48:54.033678 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 09 16:48:54.055545 master-0 kubenswrapper[32968]: I0309 16:48:54.055481 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 09 16:48:54.121390 master-0 kubenswrapper[32968]: I0309 16:48:54.121170 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 09 16:48:54.135581 master-0 kubenswrapper[32968]: I0309 16:48:54.135509 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 09 16:48:54.184088 master-0 kubenswrapper[32968]: I0309 16:48:54.183997 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 09 16:48:54.212560 master-0 kubenswrapper[32968]: I0309 16:48:54.212483 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 09 16:48:54.273932 master-0 kubenswrapper[32968]: I0309 16:48:54.273698 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-vmh5g" Mar 09 16:48:54.366570 master-0 kubenswrapper[32968]: I0309 16:48:54.366486 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 09 16:48:54.407032 master-0 kubenswrapper[32968]: I0309 16:48:54.406843 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-8bw78" Mar 09 16:48:54.408171 master-0 kubenswrapper[32968]: I0309 16:48:54.408106 32968 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 09 16:48:54.482595 master-0 kubenswrapper[32968]: I0309 16:48:54.482522 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 09 16:48:54.499670 master-0 kubenswrapper[32968]: I0309 16:48:54.499587 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 09 16:48:54.616998 master-0 kubenswrapper[32968]: I0309 16:48:54.616885 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 09 16:48:54.663118 master-0 kubenswrapper[32968]: I0309 16:48:54.662905 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 09 16:48:54.713093 master-0 kubenswrapper[32968]: I0309 16:48:54.713012 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 09 16:48:54.746127 master-0 kubenswrapper[32968]: I0309 16:48:54.746037 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 09 16:48:54.758965 master-0 kubenswrapper[32968]: I0309 16:48:54.758899 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 09 16:48:54.799181 master-0 kubenswrapper[32968]: I0309 16:48:54.799095 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 09 16:48:54.824379 master-0 kubenswrapper[32968]: I0309 16:48:54.824293 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-gpmvf" Mar 09 16:48:54.829013 master-0 kubenswrapper[32968]: I0309 16:48:54.828937 32968 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-ccfvc" Mar 09 16:48:54.899333 master-0 kubenswrapper[32968]: I0309 16:48:54.899239 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 09 16:48:54.923407 master-0 kubenswrapper[32968]: I0309 16:48:54.923274 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 09 16:48:54.957116 master-0 kubenswrapper[32968]: I0309 16:48:54.957072 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 09 16:48:54.960006 master-0 kubenswrapper[32968]: I0309 16:48:54.959952 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 09 16:48:54.985807 master-0 kubenswrapper[32968]: I0309 16:48:54.985732 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-ts66c" Mar 09 16:48:54.996273 master-0 kubenswrapper[32968]: I0309 16:48:54.996190 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 09 16:48:55.053390 master-0 kubenswrapper[32968]: I0309 16:48:55.052346 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 09 16:48:55.173963 master-0 kubenswrapper[32968]: I0309 16:48:55.173787 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-2l9mk" Mar 09 16:48:55.177066 master-0 kubenswrapper[32968]: I0309 16:48:55.177017 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 09 16:48:55.195910 master-0 kubenswrapper[32968]: I0309 16:48:55.195821 32968 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 09 16:48:55.301687 master-0 kubenswrapper[32968]: I0309 16:48:55.301579 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-db2vj" Mar 09 16:48:55.326821 master-0 kubenswrapper[32968]: I0309 16:48:55.326717 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 09 16:48:55.359867 master-0 kubenswrapper[32968]: I0309 16:48:55.359779 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 09 16:48:55.467318 master-0 kubenswrapper[32968]: I0309 16:48:55.467222 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 09 16:48:55.476630 master-0 kubenswrapper[32968]: I0309 16:48:55.476552 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 09 16:48:55.511836 master-0 kubenswrapper[32968]: I0309 16:48:55.511750 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:48:55.513384 master-0 kubenswrapper[32968]: I0309 16:48:55.513328 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 09 16:48:55.516957 master-0 kubenswrapper[32968]: I0309 16:48:55.516899 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bd7656797-fzjhw" Mar 09 16:48:55.533284 master-0 kubenswrapper[32968]: I0309 16:48:55.533184 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 09 16:48:55.533702 master-0 kubenswrapper[32968]: I0309 16:48:55.533298 32968 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 09 16:48:55.541478 master-0 kubenswrapper[32968]: I0309 16:48:55.541379 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-h7zpd" Mar 09 16:48:55.550537 master-0 kubenswrapper[32968]: I0309 16:48:55.550404 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 09 16:48:55.649073 master-0 kubenswrapper[32968]: I0309 16:48:55.648973 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 09 16:48:55.664486 master-0 kubenswrapper[32968]: I0309 16:48:55.664402 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 09 16:48:55.673142 master-0 kubenswrapper[32968]: I0309 16:48:55.673079 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 09 16:48:55.796792 master-0 kubenswrapper[32968]: I0309 16:48:55.796587 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 09 16:48:55.822080 master-0 kubenswrapper[32968]: I0309 16:48:55.822005 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 09 16:48:55.824706 master-0 kubenswrapper[32968]: I0309 16:48:55.824583 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 09 16:48:55.969339 master-0 kubenswrapper[32968]: I0309 16:48:55.969229 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 09 16:48:56.049051 master-0 kubenswrapper[32968]: I0309 
16:48:56.048860 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 09 16:48:56.101379 master-0 kubenswrapper[32968]: I0309 16:48:56.101308 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 09 16:48:56.160613 master-0 kubenswrapper[32968]: I0309 16:48:56.159015 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 09 16:48:56.244088 master-0 kubenswrapper[32968]: I0309 16:48:56.243997 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 09 16:48:56.244477 master-0 kubenswrapper[32968]: I0309 16:48:56.244409 32968 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 09 16:48:56.244817 master-0 kubenswrapper[32968]: I0309 16:48:56.244761 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" containerID="cri-o://8fab2020ef9b38432e3f16fd30963c59fa955a3ba62df68c3c2ea954609a4fb6" gracePeriod=5 Mar 09 16:48:56.258604 master-0 kubenswrapper[32968]: I0309 16:48:56.258523 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 09 16:48:56.318155 master-0 kubenswrapper[32968]: I0309 16:48:56.317950 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 09 16:48:56.323355 master-0 kubenswrapper[32968]: I0309 16:48:56.323296 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 09 16:48:56.514571 master-0 
kubenswrapper[32968]: I0309 16:48:56.514449 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 09 16:48:56.563844 master-0 kubenswrapper[32968]: I0309 16:48:56.563741 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 09 16:48:56.568601 master-0 kubenswrapper[32968]: I0309 16:48:56.568490 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 09 16:48:56.576151 master-0 kubenswrapper[32968]: I0309 16:48:56.576093 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 09 16:48:56.682823 master-0 kubenswrapper[32968]: I0309 16:48:56.682732 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 09 16:48:56.759127 master-0 kubenswrapper[32968]: I0309 16:48:56.759045 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 09 16:48:56.808703 master-0 kubenswrapper[32968]: I0309 16:48:56.808609 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 09 16:48:56.865365 master-0 kubenswrapper[32968]: I0309 16:48:56.865215 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 09 16:48:56.867611 master-0 kubenswrapper[32968]: I0309 16:48:56.867573 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 09 16:48:56.940214 master-0 kubenswrapper[32968]: I0309 16:48:56.940088 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 09 16:48:56.975360 master-0 kubenswrapper[32968]: I0309 16:48:56.975253 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 09 16:48:57.053925 master-0 kubenswrapper[32968]: I0309 16:48:57.053793 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-dzv2s" Mar 09 16:48:57.084790 master-0 kubenswrapper[32968]: I0309 16:48:57.084699 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 09 16:48:57.096916 master-0 kubenswrapper[32968]: I0309 16:48:57.096850 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 09 16:48:57.296800 master-0 kubenswrapper[32968]: I0309 16:48:57.296736 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 09 16:48:57.320859 master-0 kubenswrapper[32968]: I0309 16:48:57.320777 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-4n2zt" Mar 09 16:48:57.489595 master-0 kubenswrapper[32968]: I0309 16:48:57.489516 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 09 16:48:57.533221 master-0 kubenswrapper[32968]: I0309 16:48:57.533171 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 09 16:48:57.628526 master-0 kubenswrapper[32968]: I0309 16:48:57.628282 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 09 16:48:57.656549 master-0 kubenswrapper[32968]: I0309 16:48:57.656476 32968 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 09 16:48:57.994684 master-0 kubenswrapper[32968]: I0309 16:48:57.994291 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 09 16:48:58.187521 master-0 kubenswrapper[32968]: I0309 16:48:58.187442 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 09 16:48:58.325647 master-0 kubenswrapper[32968]: I0309 16:48:58.325496 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 09 16:48:58.354023 master-0 kubenswrapper[32968]: I0309 16:48:58.353953 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 09 16:48:58.376244 master-0 kubenswrapper[32968]: I0309 16:48:58.376169 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqzqk" Mar 09 16:48:58.407007 master-0 kubenswrapper[32968]: I0309 16:48:58.406937 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 09 16:48:58.411417 master-0 kubenswrapper[32968]: I0309 16:48:58.411335 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 09 16:48:58.411777 master-0 kubenswrapper[32968]: I0309 16:48:58.411650 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-xhwgr" Mar 09 16:48:58.529736 master-0 kubenswrapper[32968]: I0309 16:48:58.529656 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 09 16:48:58.542918 master-0 kubenswrapper[32968]: I0309 16:48:58.542057 32968 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b698b4fc8-zx5n6"] Mar 09 16:48:58.664682 master-0 kubenswrapper[32968]: I0309 16:48:58.664509 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 09 16:48:58.692667 master-0 kubenswrapper[32968]: I0309 16:48:58.692489 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 09 16:48:58.926685 master-0 kubenswrapper[32968]: I0309 16:48:58.926521 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 09 16:48:58.950793 master-0 kubenswrapper[32968]: I0309 16:48:58.950718 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 09 16:48:58.953125 master-0 kubenswrapper[32968]: I0309 16:48:58.953087 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 09 16:48:58.978727 master-0 kubenswrapper[32968]: I0309 16:48:58.978644 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 09 16:48:58.988305 master-0 kubenswrapper[32968]: I0309 16:48:58.988247 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 09 16:48:59.115206 master-0 kubenswrapper[32968]: I0309 16:48:59.115137 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-69c4t" Mar 09 16:48:59.174452 master-0 kubenswrapper[32968]: I0309 16:48:59.174363 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 09 16:48:59.181666 master-0 kubenswrapper[32968]: I0309 16:48:59.181616 32968 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 09 16:48:59.249086 master-0 kubenswrapper[32968]: I0309 16:48:59.249014 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 09 16:48:59.331047 master-0 kubenswrapper[32968]: I0309 16:48:59.330962 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 09 16:48:59.562539 master-0 kubenswrapper[32968]: I0309 16:48:59.562333 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 09 16:48:59.628776 master-0 kubenswrapper[32968]: I0309 16:48:59.628688 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 09 16:48:59.705605 master-0 kubenswrapper[32968]: I0309 16:48:59.705544 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 09 16:48:59.939542 master-0 kubenswrapper[32968]: I0309 16:48:59.939480 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 09 16:49:00.036076 master-0 kubenswrapper[32968]: I0309 16:49:00.036004 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 09 16:49:00.038474 master-0 kubenswrapper[32968]: I0309 16:49:00.038409 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 09 16:49:00.348304 master-0 kubenswrapper[32968]: I0309 16:49:00.348137 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 09 16:49:00.439605 master-0 kubenswrapper[32968]: I0309 16:49:00.439536 32968 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 09 16:49:00.568313 master-0 kubenswrapper[32968]: I0309 16:49:00.568232 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 09 16:49:01.142163 master-0 kubenswrapper[32968]: I0309 16:49:01.142109 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 09 16:49:01.159036 master-0 kubenswrapper[32968]: I0309 16:49:01.158981 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 09 16:49:01.331255 master-0 kubenswrapper[32968]: I0309 16:49:01.331180 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 09 16:49:01.480328 master-0 kubenswrapper[32968]: I0309 16:49:01.480257 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-jfns5"
Mar 09 16:49:01.821185 master-0 kubenswrapper[32968]: I0309 16:49:01.821120 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 09 16:49:01.821538 master-0 kubenswrapper[32968]: I0309 16:49:01.821191 32968 generic.go:334] "Generic (PLEG): container finished" podID="b275ed7e9ce09d69a66613ca3ae3d89e" containerID="8fab2020ef9b38432e3f16fd30963c59fa955a3ba62df68c3c2ea954609a4fb6" exitCode=137
Mar 09 16:49:01.821538 master-0 kubenswrapper[32968]: I0309 16:49:01.821257 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638f5b55e64f9ed8f2e19f70e3b3f364f9b588e45276527c5fb2608d489874cb"
Mar 09 16:49:01.837613 master-0 kubenswrapper[32968]: I0309 16:49:01.837548 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 09 16:49:01.837953 master-0 kubenswrapper[32968]: I0309 16:49:01.837676 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:49:01.930242 master-0 kubenswrapper[32968]: I0309 16:49:01.930131 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 09 16:49:01.930611 master-0 kubenswrapper[32968]: I0309 16:49:01.930295 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 09 16:49:01.930611 master-0 kubenswrapper[32968]: I0309 16:49:01.930480 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 09 16:49:01.930611 master-0 kubenswrapper[32968]: I0309 16:49:01.930507 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 09 16:49:01.930611 master-0 kubenswrapper[32968]: I0309 16:49:01.930534 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log" (OuterVolumeSpecName: "var-log") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:49:01.930753 master-0 kubenswrapper[32968]: I0309 16:49:01.930620 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 09 16:49:01.930753 master-0 kubenswrapper[32968]: I0309 16:49:01.930666 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:49:01.931089 master-0 kubenswrapper[32968]: I0309 16:49:01.931057 32968 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:01.931089 master-0 kubenswrapper[32968]: I0309 16:49:01.931085 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:01.931243 master-0 kubenswrapper[32968]: I0309 16:49:01.931211 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:49:01.931292 master-0 kubenswrapper[32968]: I0309 16:49:01.931211 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests" (OuterVolumeSpecName: "manifests") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:49:01.937094 master-0 kubenswrapper[32968]: I0309 16:49:01.937026 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:49:02.000345 master-0 kubenswrapper[32968]: I0309 16:49:02.000248 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 09 16:49:02.033563 master-0 kubenswrapper[32968]: I0309 16:49:02.033313 32968 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:02.033563 master-0 kubenswrapper[32968]: I0309 16:49:02.033389 32968 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:02.033563 master-0 kubenswrapper[32968]: I0309 16:49:02.033404 32968 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:02.095754 master-0 kubenswrapper[32968]: I0309 16:49:02.095662 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" path="/var/lib/kubelet/pods/b275ed7e9ce09d69a66613ca3ae3d89e/volumes"
Mar 09 16:49:02.828117 master-0 kubenswrapper[32968]: I0309 16:49:02.828000 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 09 16:49:23.600601 master-0 kubenswrapper[32968]: I0309 16:49:23.600494 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7b698b4fc8-zx5n6" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console" containerID="cri-o://b01d0c869ea1d167a340151a19787789a61a64c5fbb67d3bf03f6f87127e32ac" gracePeriod=15
Mar 09 16:49:24.002989 master-0 kubenswrapper[32968]: I0309 16:49:24.002583 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b698b4fc8-zx5n6_8fda5a84-b685-4333-858b-33123158c1e6/console/0.log"
Mar 09 16:49:24.002989 master-0 kubenswrapper[32968]: I0309 16:49:24.002724 32968 generic.go:334] "Generic (PLEG): container finished" podID="8fda5a84-b685-4333-858b-33123158c1e6" containerID="b01d0c869ea1d167a340151a19787789a61a64c5fbb67d3bf03f6f87127e32ac" exitCode=2
Mar 09 16:49:24.002989 master-0 kubenswrapper[32968]: I0309 16:49:24.002781 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b698b4fc8-zx5n6" event={"ID":"8fda5a84-b685-4333-858b-33123158c1e6","Type":"ContainerDied","Data":"b01d0c869ea1d167a340151a19787789a61a64c5fbb67d3bf03f6f87127e32ac"}
Mar 09 16:49:24.188837 master-0 kubenswrapper[32968]: I0309 16:49:24.188786 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7b698b4fc8-zx5n6_8fda5a84-b685-4333-858b-33123158c1e6/console/0.log"
Mar 09 16:49:24.189306 master-0 kubenswrapper[32968]: I0309 16:49:24.189291 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b698b4fc8-zx5n6"
Mar 09 16:49:24.226894 master-0 kubenswrapper[32968]: I0309 16:49:24.226703 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-trusted-ca-bundle\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.226894 master-0 kubenswrapper[32968]: I0309 16:49:24.226833 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpt7z\" (UniqueName: \"kubernetes.io/projected/8fda5a84-b685-4333-858b-33123158c1e6-kube-api-access-gpt7z\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.227486 master-0 kubenswrapper[32968]: I0309 16:49:24.227404 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-console-config\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.227614 master-0 kubenswrapper[32968]: I0309 16:49:24.227597 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-service-ca\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.227811 master-0 kubenswrapper[32968]: I0309 16:49:24.227791 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-oauth-config\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.227915 master-0 kubenswrapper[32968]: I0309 16:49:24.227900 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-oauth-serving-cert\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.228111 master-0 kubenswrapper[32968]: I0309 16:49:24.228098 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-serving-cert\") pod \"8fda5a84-b685-4333-858b-33123158c1e6\" (UID: \"8fda5a84-b685-4333-858b-33123158c1e6\") "
Mar 09 16:49:24.228283 master-0 kubenswrapper[32968]: I0309 16:49:24.227771 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:49:24.228340 master-0 kubenswrapper[32968]: I0309 16:49:24.227846 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-console-config" (OuterVolumeSpecName: "console-config") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:49:24.228340 master-0 kubenswrapper[32968]: I0309 16:49:24.228227 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-service-ca" (OuterVolumeSpecName: "service-ca") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:49:24.228632 master-0 kubenswrapper[32968]: I0309 16:49:24.228586 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:49:24.229107 master-0 kubenswrapper[32968]: I0309 16:49:24.229086 32968 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:24.229190 master-0 kubenswrapper[32968]: I0309 16:49:24.229178 32968 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-console-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:24.229267 master-0 kubenswrapper[32968]: I0309 16:49:24.229256 32968 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:24.229340 master-0 kubenswrapper[32968]: I0309 16:49:24.229329 32968 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8fda5a84-b685-4333-858b-33123158c1e6-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:24.230869 master-0 kubenswrapper[32968]: I0309 16:49:24.230831 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:49:24.230956 master-0 kubenswrapper[32968]: I0309 16:49:24.230919 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fda5a84-b685-4333-858b-33123158c1e6-kube-api-access-gpt7z" (OuterVolumeSpecName: "kube-api-access-gpt7z") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "kube-api-access-gpt7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:49:24.231592 master-0 kubenswrapper[32968]: I0309 16:49:24.231551 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8fda5a84-b685-4333-858b-33123158c1e6" (UID: "8fda5a84-b685-4333-858b-33123158c1e6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:49:24.331694 master-0 kubenswrapper[32968]: I0309 16:49:24.331565 32968 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:24.331694 master-0 kubenswrapper[32968]: I0309 16:49:24.331655 32968 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fda5a84-b685-4333-858b-33123158c1e6-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:24.331694 master-0 kubenswrapper[32968]: I0309 16:49:24.331669 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpt7z\" (UniqueName: \"kubernetes.io/projected/8fda5a84-b685-4333-858b-33123158c1e6-kube-api-access-gpt7z\") on node \"master-0\" DevicePath \"\""
Mar 09 16:49:25.014916 master-0 kubenswrapper[32968]: I0309 16:49:25.014831 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b698b4fc8-zx5n6" event={"ID":"8fda5a84-b685-4333-858b-33123158c1e6","Type":"ContainerDied","Data":"b6c6db84395133bf9572a48bcc8e86fb8c8257d441f91abdc7d58960c1c686cf"}
Mar 09 16:49:25.014916 master-0 kubenswrapper[32968]: I0309 16:49:25.014921 32968 scope.go:117] "RemoveContainer" containerID="b01d0c869ea1d167a340151a19787789a61a64c5fbb67d3bf03f6f87127e32ac"
Mar 09 16:49:25.015996 master-0 kubenswrapper[32968]: I0309 16:49:25.015043 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b698b4fc8-zx5n6"
Mar 09 16:49:25.073100 master-0 kubenswrapper[32968]: I0309 16:49:25.073022 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7b698b4fc8-zx5n6"]
Mar 09 16:49:25.079535 master-0 kubenswrapper[32968]: I0309 16:49:25.079459 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7b698b4fc8-zx5n6"]
Mar 09 16:49:26.095565 master-0 kubenswrapper[32968]: I0309 16:49:26.095470 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fda5a84-b685-4333-858b-33123158c1e6" path="/var/lib/kubelet/pods/8fda5a84-b685-4333-858b-33123158c1e6/volumes"
Mar 09 16:50:02.815757 master-0 kubenswrapper[32968]: I0309 16:50:02.815672 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: E0309 16:50:02.816139 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" containerName="installer"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: I0309 16:50:02.816163 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" containerName="installer"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: E0309 16:50:02.816187 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: I0309 16:50:02.816196 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: E0309 16:50:02.816223 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: I0309 16:50:02.816237 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: I0309 16:50:02.816527 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fda5a84-b685-4333-858b-33123158c1e6" containerName="console"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: I0309 16:50:02.816556 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea05ed1-c8b7-4ed5-ae5a-360bd225c1df" containerName="installer"
Mar 09 16:50:02.816630 master-0 kubenswrapper[32968]: I0309 16:50:02.816569 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 09 16:50:02.824783 master-0 kubenswrapper[32968]: I0309 16:50:02.824699 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.827908 master-0 kubenswrapper[32968]: I0309 16:50:02.827848 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 09 16:50:02.828762 master-0 kubenswrapper[32968]: I0309 16:50:02.828702 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 09 16:50:02.829116 master-0 kubenswrapper[32968]: I0309 16:50:02.829086 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 09 16:50:02.829296 master-0 kubenswrapper[32968]: I0309 16:50:02.829267 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 09 16:50:02.833302 master-0 kubenswrapper[32968]: I0309 16:50:02.833230 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 09 16:50:02.833602 master-0 kubenswrapper[32968]: I0309 16:50:02.833331 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 09 16:50:02.833941 master-0 kubenswrapper[32968]: I0309 16:50:02.833913 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 09 16:50:02.840753 master-0 kubenswrapper[32968]: I0309 16:50:02.840699 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 09 16:50:02.900237 master-0 kubenswrapper[32968]: I0309 16:50:02.900145 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 09 16:50:02.957839 master-0 kubenswrapper[32968]: I0309 16:50:02.957749 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958300 master-0 kubenswrapper[32968]: I0309 16:50:02.958284 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-web-config\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958441 master-0 kubenswrapper[32968]: I0309 16:50:02.958401 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958536 master-0 kubenswrapper[32968]: I0309 16:50:02.958523 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958628 master-0 kubenswrapper[32968]: I0309 16:50:02.958612 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958738 master-0 kubenswrapper[32968]: I0309 16:50:02.958722 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsg2k\" (UniqueName: \"kubernetes.io/projected/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-kube-api-access-qsg2k\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958822 master-0 kubenswrapper[32968]: I0309 16:50:02.958809 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.958921 master-0 kubenswrapper[32968]: I0309 16:50:02.958908 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.959104 master-0 kubenswrapper[32968]: I0309 16:50:02.958997 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-config-out\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.959344 master-0 kubenswrapper[32968]: I0309 16:50:02.959325 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-config-volume\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.959476 master-0 kubenswrapper[32968]: I0309 16:50:02.959458 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:02.968657 master-0 kubenswrapper[32968]: I0309 16:50:02.959562 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.070850 master-0 kubenswrapper[32968]: I0309 16:50:03.070642 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.070850 master-0 kubenswrapper[32968]: I0309 16:50:03.070739 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.071306 master-0 kubenswrapper[32968]: I0309 16:50:03.070925 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.071306 master-0 kubenswrapper[32968]: I0309 16:50:03.071219 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsg2k\" (UniqueName: \"kubernetes.io/projected/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-kube-api-access-qsg2k\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.071395 master-0 kubenswrapper[32968]: I0309 16:50:03.071328 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.072807 master-0 kubenswrapper[32968]: I0309 16:50:03.071445 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.072807 master-0 kubenswrapper[32968]: I0309 16:50:03.071967 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.072807 master-0 kubenswrapper[32968]: I0309 16:50:03.072733 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.074180 master-0 kubenswrapper[32968]: I0309 16:50:03.074085 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.075052 master-0 kubenswrapper[32968]: I0309 16:50:03.074211 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-config-out\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.075505 master-0 kubenswrapper[32968]: I0309 16:50:03.075476 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-config-volume\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.075593 master-0 kubenswrapper[32968]: I0309 16:50:03.075545 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.075645 master-0 kubenswrapper[32968]: I0309 16:50:03.075598 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.076338 master-0 kubenswrapper[32968]: I0309 16:50:03.075688 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.076338 master-0 kubenswrapper[32968]: I0309 16:50:03.075796 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-web-config\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.090364 master-0 kubenswrapper[32968]: I0309 16:50:03.090295 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.090364 master-0 kubenswrapper[32968]: I0309 16:50:03.090442 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-tls-assets\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.090903 master-0 kubenswrapper[32968]: I0309 16:50:03.090720 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.090903 master-0 kubenswrapper[32968]: I0309 16:50:03.090766 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-config-volume\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.094008 master-0 kubenswrapper[32968]: I0309 16:50:03.091093 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-web-config\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.094008 master-0 kubenswrapper[32968]: I0309 16:50:03.092122 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.094357 master-0 kubenswrapper[32968]: I0309 16:50:03.094192 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-config-out\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.097448 master-0 kubenswrapper[32968]: I0309 16:50:03.094487 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.118315 master-0 kubenswrapper[32968]: I0309 16:50:03.117844 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsg2k\" (UniqueName: \"kubernetes.io/projected/264ccb81-c8bb-4d2e-84e6-4d0d689f30f7-kube-api-access-qsg2k\") pod \"alertmanager-main-0\" (UID: \"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.150137 master-0 kubenswrapper[32968]: I0309 16:50:03.150048 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 09 16:50:03.719327 master-0 kubenswrapper[32968]: I0309 16:50:03.719239 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 09 16:50:03.799180 master-0 kubenswrapper[32968]: I0309 16:50:03.797924 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg"]
Mar 09 16:50:03.812865 master-0 kubenswrapper[32968]: I0309 16:50:03.811808 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg"
Mar 09 16:50:03.822009 master-0 kubenswrapper[32968]: I0309 16:50:03.821940 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 09 16:50:03.822875 master-0 kubenswrapper[32968]: I0309 16:50:03.822267 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 09 16:50:03.822875 master-0 kubenswrapper[32968]: I0309 16:50:03.822564 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-e7i40vj9v8nmv"
Mar 09 16:50:03.822875 master-0 kubenswrapper[32968]: I0309 16:50:03.822617 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 09 16:50:03.826923 master-0 kubenswrapper[32968]: I0309 16:50:03.826882 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 09 16:50:03.827297 master-0 kubenswrapper[32968]: I0309 16:50:03.827033 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 09 16:50:03.830841 master-0 kubenswrapper[32968]: I0309 16:50:03.830758 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg"] Mar 09 16:50:04.004772 master-0 kubenswrapper[32968]: I0309 16:50:04.004690 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005139 master-0 kubenswrapper[32968]: I0309 16:50:04.004793 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005139 master-0 kubenswrapper[32968]: I0309 16:50:04.004831 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnblw\" (UniqueName: \"kubernetes.io/projected/f4f159be-3253-461d-8ffa-b9866a262952-kube-api-access-jnblw\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005139 master-0 kubenswrapper[32968]: I0309 16:50:04.004868 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-grpc-tls\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005139 master-0 kubenswrapper[32968]: I0309 
16:50:04.004996 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005363 master-0 kubenswrapper[32968]: I0309 16:50:04.005168 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f159be-3253-461d-8ffa-b9866a262952-metrics-client-ca\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005363 master-0 kubenswrapper[32968]: I0309 16:50:04.005221 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.005363 master-0 kubenswrapper[32968]: I0309 16:50:04.005246 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-tls\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.106960 master-0 kubenswrapper[32968]: I0309 16:50:04.106875 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.106960 master-0 kubenswrapper[32968]: I0309 16:50:04.106952 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.107504 master-0 kubenswrapper[32968]: I0309 16:50:04.107164 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnblw\" (UniqueName: \"kubernetes.io/projected/f4f159be-3253-461d-8ffa-b9866a262952-kube-api-access-jnblw\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.107504 master-0 kubenswrapper[32968]: I0309 16:50:04.107361 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-grpc-tls\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.107504 master-0 kubenswrapper[32968]: I0309 16:50:04.107446 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-metrics\") pod 
\"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.107504 master-0 kubenswrapper[32968]: I0309 16:50:04.107472 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f159be-3253-461d-8ffa-b9866a262952-metrics-client-ca\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.107504 master-0 kubenswrapper[32968]: I0309 16:50:04.107503 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.107700 master-0 kubenswrapper[32968]: I0309 16:50:04.107534 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-tls\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.118580 master-0 kubenswrapper[32968]: I0309 16:50:04.109834 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f159be-3253-461d-8ffa-b9866a262952-metrics-client-ca\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.118580 master-0 kubenswrapper[32968]: I0309 16:50:04.115001 32968 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.121601 master-0 kubenswrapper[32968]: I0309 16:50:04.121010 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-grpc-tls\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.122503 master-0 kubenswrapper[32968]: I0309 16:50:04.122449 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.125662 master-0 kubenswrapper[32968]: I0309 16:50:04.125593 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.126191 master-0 kubenswrapper[32968]: I0309 16:50:04.126141 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-tls\") pod 
\"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.134510 master-0 kubenswrapper[32968]: I0309 16:50:04.134415 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/f4f159be-3253-461d-8ffa-b9866a262952-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.138345 master-0 kubenswrapper[32968]: I0309 16:50:04.138280 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnblw\" (UniqueName: \"kubernetes.io/projected/f4f159be-3253-461d-8ffa-b9866a262952-kube-api-access-jnblw\") pod \"thanos-querier-8ff5c97d7-ksqrg\" (UID: \"f4f159be-3253-461d-8ffa-b9866a262952\") " pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.188673 master-0 kubenswrapper[32968]: I0309 16:50:04.188554 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:04.411306 master-0 kubenswrapper[32968]: I0309 16:50:04.410005 32968 generic.go:334] "Generic (PLEG): container finished" podID="264ccb81-c8bb-4d2e-84e6-4d0d689f30f7" containerID="9e3ad16e7dd109b81f262382c08822c4195b21e10f43fd1272c98b34c9bb5deb" exitCode=0 Mar 09 16:50:04.411306 master-0 kubenswrapper[32968]: I0309 16:50:04.410088 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerDied","Data":"9e3ad16e7dd109b81f262382c08822c4195b21e10f43fd1272c98b34c9bb5deb"} Mar 09 16:50:04.411306 master-0 kubenswrapper[32968]: I0309 16:50:04.410129 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"bdd8a271e11cb04c83dcc4ef9f6478cfefd5557901c29ec7c6c60634a9dfa19f"} Mar 09 16:50:04.716475 master-0 kubenswrapper[32968]: I0309 16:50:04.715817 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg"] Mar 09 16:50:04.717554 master-0 kubenswrapper[32968]: W0309 16:50:04.717410 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4f159be_3253_461d_8ffa_b9866a262952.slice/crio-4653cb828078da085f6ff0a5a79a9839676d6369d266227130b8c89431ffd0ab WatchSource:0}: Error finding container 4653cb828078da085f6ff0a5a79a9839676d6369d266227130b8c89431ffd0ab: Status 404 returned error can't find the container with id 4653cb828078da085f6ff0a5a79a9839676d6369d266227130b8c89431ffd0ab Mar 09 16:50:05.429003 master-0 kubenswrapper[32968]: I0309 16:50:05.428360 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" 
event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"4653cb828078da085f6ff0a5a79a9839676d6369d266227130b8c89431ffd0ab"} Mar 09 16:50:06.390967 master-0 kubenswrapper[32968]: I0309 16:50:06.390885 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-78689dfb94-ljj9k"] Mar 09 16:50:06.392416 master-0 kubenswrapper[32968]: I0309 16:50:06.392363 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.400673 master-0 kubenswrapper[32968]: I0309 16:50:06.400597 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2bs2215ssmiu1" Mar 09 16:50:06.414505 master-0 kubenswrapper[32968]: I0309 16:50:06.414363 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-78689dfb94-ljj9k"] Mar 09 16:50:06.425023 master-0 kubenswrapper[32968]: I0309 16:50:06.424921 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7c4558858-9rclt"] Mar 09 16:50:06.425441 master-0 kubenswrapper[32968]: I0309 16:50:06.425233 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" podUID="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" containerName="metrics-server" containerID="cri-o://7c7e82a8000eb584fb2d9fc14766cd7c65340bfb72b0d9d1871812e5a7249542" gracePeriod=170 Mar 09 16:50:06.575882 master-0 kubenswrapper[32968]: I0309 16:50:06.575760 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbkr\" (UniqueName: \"kubernetes.io/projected/9f90210c-4127-4580-826d-47ca6479d26b-kube-api-access-gbbkr\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.575882 
master-0 kubenswrapper[32968]: I0309 16:50:06.575883 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f90210c-4127-4580-826d-47ca6479d26b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.577950 master-0 kubenswrapper[32968]: I0309 16:50:06.575954 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-client-ca-bundle\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.577950 master-0 kubenswrapper[32968]: I0309 16:50:06.575986 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-secret-metrics-client-certs\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.577950 master-0 kubenswrapper[32968]: I0309 16:50:06.576069 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f90210c-4127-4580-826d-47ca6479d26b-audit-log\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.577950 master-0 kubenswrapper[32968]: I0309 16:50:06.576192 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" 
(UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-secret-metrics-server-tls\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.577950 master-0 kubenswrapper[32968]: I0309 16:50:06.576680 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f90210c-4127-4580-826d-47ca6479d26b-metrics-server-audit-profiles\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.679543 master-0 kubenswrapper[32968]: I0309 16:50:06.679354 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f90210c-4127-4580-826d-47ca6479d26b-audit-log\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.679543 master-0 kubenswrapper[32968]: I0309 16:50:06.679486 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-secret-metrics-server-tls\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.679964 master-0 kubenswrapper[32968]: I0309 16:50:06.679604 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f90210c-4127-4580-826d-47ca6479d26b-metrics-server-audit-profiles\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " 
pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.679964 master-0 kubenswrapper[32968]: I0309 16:50:06.679656 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbbkr\" (UniqueName: \"kubernetes.io/projected/9f90210c-4127-4580-826d-47ca6479d26b-kube-api-access-gbbkr\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.679964 master-0 kubenswrapper[32968]: I0309 16:50:06.679737 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f90210c-4127-4580-826d-47ca6479d26b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.680237 master-0 kubenswrapper[32968]: I0309 16:50:06.680173 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/9f90210c-4127-4580-826d-47ca6479d26b-audit-log\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.680326 master-0 kubenswrapper[32968]: I0309 16:50:06.680287 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-client-ca-bundle\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.680370 master-0 kubenswrapper[32968]: I0309 16:50:06.680336 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-secret-metrics-client-certs\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.683562 master-0 kubenswrapper[32968]: I0309 16:50:06.683523 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f90210c-4127-4580-826d-47ca6479d26b-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.684708 master-0 kubenswrapper[32968]: I0309 16:50:06.684672 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/9f90210c-4127-4580-826d-47ca6479d26b-metrics-server-audit-profiles\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.695747 master-0 kubenswrapper[32968]: I0309 16:50:06.695675 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-secret-metrics-server-tls\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.695747 master-0 kubenswrapper[32968]: I0309 16:50:06.695685 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-secret-metrics-client-certs\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " 
pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.701974 master-0 kubenswrapper[32968]: I0309 16:50:06.701883 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbbkr\" (UniqueName: \"kubernetes.io/projected/9f90210c-4127-4580-826d-47ca6479d26b-kube-api-access-gbbkr\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.706180 master-0 kubenswrapper[32968]: I0309 16:50:06.705827 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f90210c-4127-4580-826d-47ca6479d26b-client-ca-bundle\") pod \"metrics-server-78689dfb94-ljj9k\" (UID: \"9f90210c-4127-4580-826d-47ca6479d26b\") " pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:06.767535 master-0 kubenswrapper[32968]: I0309 16:50:06.767456 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" Mar 09 16:50:08.684802 master-0 kubenswrapper[32968]: I0309 16:50:08.684674 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 09 16:50:08.688449 master-0 kubenswrapper[32968]: I0309 16:50:08.688364 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.699551 master-0 kubenswrapper[32968]: I0309 16:50:08.699474 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 09 16:50:08.700219 master-0 kubenswrapper[32968]: I0309 16:50:08.700188 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 09 16:50:08.700822 master-0 kubenswrapper[32968]: I0309 16:50:08.700791 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 09 16:50:08.708573 master-0 kubenswrapper[32968]: I0309 16:50:08.708504 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 09 16:50:08.708849 master-0 kubenswrapper[32968]: I0309 16:50:08.708813 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 09 16:50:08.709894 master-0 kubenswrapper[32968]: I0309 16:50:08.709744 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 09 16:50:08.710481 master-0 kubenswrapper[32968]: I0309 16:50:08.710390 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 09 16:50:08.710660 master-0 kubenswrapper[32968]: I0309 16:50:08.710611 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 09 16:50:08.711088 master-0 kubenswrapper[32968]: I0309 16:50:08.711064 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 09 16:50:08.711542 master-0 kubenswrapper[32968]: I0309 16:50:08.711383 32968 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-at0vm6kg6eeon" Mar 09 16:50:08.716320 master-0 kubenswrapper[32968]: I0309 16:50:08.716226 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 09 16:50:08.729846 master-0 kubenswrapper[32968]: I0309 16:50:08.729767 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.762994 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5efa7b9a-886a-4e7a-a5cd-603fb2756746-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763066 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-config\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763097 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763129 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763174 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5efa7b9a-886a-4e7a-a5cd-603fb2756746-config-out\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763201 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763235 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763269 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763393 
32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763448 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763475 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-web-config\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763504 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763536 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763574 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763600 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763634 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763669 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.765702 master-0 kubenswrapper[32968]: I0309 16:50:08.763690 32968 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg7b6\" (UniqueName: \"kubernetes.io/projected/5efa7b9a-886a-4e7a-a5cd-603fb2756746-kube-api-access-sg7b6\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.855836 master-0 kubenswrapper[32968]: I0309 16:50:08.855721 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865092 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865179 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg7b6\" (UniqueName: \"kubernetes.io/projected/5efa7b9a-886a-4e7a-a5cd-603fb2756746-kube-api-access-sg7b6\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865239 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5efa7b9a-886a-4e7a-a5cd-603fb2756746-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865269 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-config\") pod \"prometheus-k8s-0\" (UID: 
\"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865292 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865315 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865353 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5efa7b9a-886a-4e7a-a5cd-603fb2756746-config-out\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865378 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.865518 master-0 kubenswrapper[32968]: I0309 16:50:08.865406 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865656 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865736 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865806 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865835 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865859 32968 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-web-config\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865918 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865969 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.865989 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.866639 master-0 kubenswrapper[32968]: I0309 16:50:08.866018 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.869846 master-0 kubenswrapper[32968]: I0309 
16:50:08.869797 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.870355 master-0 kubenswrapper[32968]: I0309 16:50:08.870324 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.871713 master-0 kubenswrapper[32968]: I0309 16:50:08.871525 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.874046 master-0 kubenswrapper[32968]: I0309 16:50:08.873982 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.874639 master-0 kubenswrapper[32968]: I0309 16:50:08.874596 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.874843 master-0 kubenswrapper[32968]: I0309 16:50:08.874804 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.875454 master-0 kubenswrapper[32968]: I0309 16:50:08.875018 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.875992 master-0 kubenswrapper[32968]: I0309 16:50:08.875958 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-config\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.882355 master-0 kubenswrapper[32968]: I0309 16:50:08.882242 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-web-config\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.883398 master-0 kubenswrapper[32968]: I0309 16:50:08.882833 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5efa7b9a-886a-4e7a-a5cd-603fb2756746-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.884105 master-0 kubenswrapper[32968]: 
I0309 16:50:08.884061 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5efa7b9a-886a-4e7a-a5cd-603fb2756746-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.887517 master-0 kubenswrapper[32968]: I0309 16:50:08.887377 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.888137 master-0 kubenswrapper[32968]: I0309 16:50:08.888083 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.888609 master-0 kubenswrapper[32968]: I0309 16:50:08.888494 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.888609 master-0 kubenswrapper[32968]: I0309 16:50:08.888573 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.889839 master-0 kubenswrapper[32968]: I0309 16:50:08.889766 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5efa7b9a-886a-4e7a-a5cd-603fb2756746-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.892529 master-0 kubenswrapper[32968]: I0309 16:50:08.892311 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5efa7b9a-886a-4e7a-a5cd-603fb2756746-config-out\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:08.897292 master-0 kubenswrapper[32968]: I0309 16:50:08.897207 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg7b6\" (UniqueName: \"kubernetes.io/projected/5efa7b9a-886a-4e7a-a5cd-603fb2756746-kube-api-access-sg7b6\") pod \"prometheus-k8s-0\" (UID: \"5efa7b9a-886a-4e7a-a5cd-603fb2756746\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:09.021489 master-0 kubenswrapper[32968]: I0309 16:50:09.021244 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 09 16:50:09.504496 master-0 kubenswrapper[32968]: I0309 16:50:09.504368 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-78689dfb94-ljj9k"] Mar 09 16:50:09.510452 master-0 kubenswrapper[32968]: I0309 16:50:09.508684 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"52438c94a4577c44af68c7ec2b2a07991fb8db78706991bc8ee53f243a835634"} Mar 09 16:50:09.510452 master-0 kubenswrapper[32968]: I0309 16:50:09.508759 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"cd467e93630fe3370a604990cba932617b69d444f1bfe7d3fc2ff78f1058be34"} Mar 09 16:50:09.518748 master-0 kubenswrapper[32968]: I0309 16:50:09.518122 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"de2f5f39a86d315166b69249c1015de4e534f3247681e005b5471c4ff7522831"} Mar 09 16:50:09.518748 master-0 kubenswrapper[32968]: I0309 16:50:09.518203 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"ceb6ee8f8f0d5ac37af1c09eaf9b23bb6f6c6bd64f18fc5bd3a595d4cde84c9b"} Mar 09 16:50:09.719365 master-0 kubenswrapper[32968]: I0309 16:50:09.718739 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 09 16:50:10.540385 master-0 kubenswrapper[32968]: I0309 16:50:10.540155 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" 
event={"ID":"9f90210c-4127-4580-826d-47ca6479d26b","Type":"ContainerStarted","Data":"e53eb783229fb8621d770dd8837654ebd5da39d22da1f6ad7f3453ea396ed70e"} Mar 09 16:50:10.540385 master-0 kubenswrapper[32968]: I0309 16:50:10.540300 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" event={"ID":"9f90210c-4127-4580-826d-47ca6479d26b","Type":"ContainerStarted","Data":"6abb6504507331fe989cae833c04f60cb1023e6fa9dbe684ef101f3c06b057d6"} Mar 09 16:50:10.544217 master-0 kubenswrapper[32968]: I0309 16:50:10.544147 32968 generic.go:334] "Generic (PLEG): container finished" podID="5efa7b9a-886a-4e7a-a5cd-603fb2756746" containerID="13124d425ff3085760478e1a9ec1c0e4c89648589f1079af6fde14d5fadf093e" exitCode=0 Mar 09 16:50:10.544308 master-0 kubenswrapper[32968]: I0309 16:50:10.544208 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerDied","Data":"13124d425ff3085760478e1a9ec1c0e4c89648589f1079af6fde14d5fadf093e"} Mar 09 16:50:10.544308 master-0 kubenswrapper[32968]: I0309 16:50:10.544256 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"e0d4dfb00b6c366c39b8a922831c060be7dc349db003deed7a6b377d041090e8"} Mar 09 16:50:10.557767 master-0 kubenswrapper[32968]: I0309 16:50:10.557657 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"9873e8ce0f08b526a64958c8f8e51a63b9db783d84cf97d897396569426a74ad"} Mar 09 16:50:10.557767 master-0 kubenswrapper[32968]: I0309 16:50:10.557761 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"091d3f4dd8d65894bfbf919598f6abef438386e7e1555d05a3c6803c5188f91d"} Mar 09 16:50:10.557767 master-0 kubenswrapper[32968]: I0309 16:50:10.557781 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"d5d345167994e96c4fb23de4405ce09ee80a10cd17221c1b01f18f5a8a98cd2a"} Mar 09 16:50:10.563177 master-0 kubenswrapper[32968]: I0309 16:50:10.562852 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"6ab307e8d6413906c670bbc958ef42ddde552841ed4b008d1e92cd8f041f8a59"} Mar 09 16:50:10.572834 master-0 kubenswrapper[32968]: I0309 16:50:10.572711 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k" podStartSLOduration=4.572657471 podStartE2EDuration="4.572657471s" podCreationTimestamp="2026-03-09 16:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:50:10.566811354 +0000 UTC m=+236.670133914" watchObservedRunningTime="2026-03-09 16:50:10.572657471 +0000 UTC m=+236.675980011" Mar 09 16:50:11.585741 master-0 kubenswrapper[32968]: I0309 16:50:11.585648 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"264ccb81-c8bb-4d2e-84e6-4d0d689f30f7","Type":"ContainerStarted","Data":"38a9c72c94eee2496058d2ecbd9bf6651ba820001fe7a3c75f0a7656d7dddf05"} Mar 09 16:50:11.596719 master-0 kubenswrapper[32968]: I0309 16:50:11.595638 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" 
event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"80a022cada6e69681fd9f6f998eaf1fe8739601def1154ae2504a30e81f791a8"} Mar 09 16:50:11.596719 master-0 kubenswrapper[32968]: I0309 16:50:11.595788 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:11.596719 master-0 kubenswrapper[32968]: I0309 16:50:11.595807 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"f9304b38fc29606e27c95560481bc7da3f96427d449c6a073d08311ad8ca3a6e"} Mar 09 16:50:11.596719 master-0 kubenswrapper[32968]: I0309 16:50:11.595821 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" event={"ID":"f4f159be-3253-461d-8ffa-b9866a262952","Type":"ContainerStarted","Data":"7c92479d0d2de8c5afce2d314cebe928a0ef5350389e9e222a7253b4f0cf803e"} Mar 09 16:50:11.627499 master-0 kubenswrapper[32968]: I0309 16:50:11.626976 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.115374087 podStartE2EDuration="9.626938333s" podCreationTimestamp="2026-03-09 16:50:02 +0000 UTC" firstStartedPulling="2026-03-09 16:50:04.412214378 +0000 UTC m=+230.515536918" lastFinishedPulling="2026-03-09 16:50:10.923778624 +0000 UTC m=+237.027101164" observedRunningTime="2026-03-09 16:50:11.622620018 +0000 UTC m=+237.725942568" watchObservedRunningTime="2026-03-09 16:50:11.626938333 +0000 UTC m=+237.730260893" Mar 09 16:50:11.661915 master-0 kubenswrapper[32968]: I0309 16:50:11.661795 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" podStartSLOduration=2.460769797 podStartE2EDuration="8.661766217s" podCreationTimestamp="2026-03-09 16:50:03 
+0000 UTC" firstStartedPulling="2026-03-09 16:50:04.721286305 +0000 UTC m=+230.824608845" lastFinishedPulling="2026-03-09 16:50:10.922282705 +0000 UTC m=+237.025605265" observedRunningTime="2026-03-09 16:50:11.66002102 +0000 UTC m=+237.763343580" watchObservedRunningTime="2026-03-09 16:50:11.661766217 +0000 UTC m=+237.765088767" Mar 09 16:50:14.199375 master-0 kubenswrapper[32968]: I0309 16:50:14.199287 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8ff5c97d7-ksqrg" Mar 09 16:50:16.681804 master-0 kubenswrapper[32968]: I0309 16:50:16.681752 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"c3b975aa0245ad5790ddd4fd7041062ea12d036db02be75e3a2ec06a6d503665"} Mar 09 16:50:16.682336 master-0 kubenswrapper[32968]: I0309 16:50:16.681823 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"604158aa438e046e8d1ee23c9422efd63e72710c3635bda790d2c9a01ac03091"} Mar 09 16:50:16.682336 master-0 kubenswrapper[32968]: I0309 16:50:16.681834 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"d16e8837be77a600f24c97e07a915ea12b0c8666d3396996d9f4da847ed75260"} Mar 09 16:50:16.682336 master-0 kubenswrapper[32968]: I0309 16:50:16.681845 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"81d46cd1f6d1881772df343289292f848f7eee6ffe921add10ffe56b622978d1"} Mar 09 16:50:17.696223 master-0 kubenswrapper[32968]: I0309 16:50:17.696121 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"25b084f465c3039a6a2ad7dc8bd2e0d13712c1bfb82d11c18e225c6f9a56f4eb"}
Mar 09 16:50:17.696223 master-0 kubenswrapper[32968]: I0309 16:50:17.696192 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5efa7b9a-886a-4e7a-a5cd-603fb2756746","Type":"ContainerStarted","Data":"a4943d624153dbb6d71cf219cf0eb0fa948b938b1655d368996a56a3b24416ee"}
Mar 09 16:50:17.749713 master-0 kubenswrapper[32968]: I0309 16:50:17.749598 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.614182732 podStartE2EDuration="9.749580795s" podCreationTimestamp="2026-03-09 16:50:08 +0000 UTC" firstStartedPulling="2026-03-09 16:50:10.551380701 +0000 UTC m=+236.654703241" lastFinishedPulling="2026-03-09 16:50:15.686778764 +0000 UTC m=+241.790101304" observedRunningTime="2026-03-09 16:50:17.743291607 +0000 UTC m=+243.846614157" watchObservedRunningTime="2026-03-09 16:50:17.749580795 +0000 UTC m=+243.852903325"
Mar 09 16:50:19.022305 master-0 kubenswrapper[32968]: I0309 16:50:19.022224 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 09 16:50:26.769750 master-0 kubenswrapper[32968]: I0309 16:50:26.769665 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k"
Mar 09 16:50:26.769750 master-0 kubenswrapper[32968]: I0309 16:50:26.769747 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k"
Mar 09 16:50:46.774990 master-0 kubenswrapper[32968]: I0309 16:50:46.774907 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k"
Mar 09 16:50:46.779156 master-0 kubenswrapper[32968]: I0309 16:50:46.779109 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-78689dfb94-ljj9k"
Mar 09 16:51:09.022074 master-0 kubenswrapper[32968]: I0309 16:51:09.021974 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 09 16:51:09.055828 master-0 kubenswrapper[32968]: I0309 16:51:09.055755 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 09 16:51:09.136370 master-0 kubenswrapper[32968]: I0309 16:51:09.136301 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 09 16:51:14.072878 master-0 kubenswrapper[32968]: I0309 16:51:14.072812 32968 kubelet.go:1505] "Image garbage collection succeeded"
Mar 09 16:51:39.734630 master-0 kubenswrapper[32968]: I0309 16:51:39.734540 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c86f77cc4-dksdp"]
Mar 09 16:51:39.735868 master-0 kubenswrapper[32968]: I0309 16:51:39.735821 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.768746 master-0 kubenswrapper[32968]: I0309 16:51:39.768672 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c86f77cc4-dksdp"]
Mar 09 16:51:39.824361 master-0 kubenswrapper[32968]: I0309 16:51:39.824273 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-oauth-config\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.824361 master-0 kubenswrapper[32968]: I0309 16:51:39.824345 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-trusted-ca-bundle\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.824729 master-0 kubenswrapper[32968]: I0309 16:51:39.824385 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-serving-cert\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.824729 master-0 kubenswrapper[32968]: I0309 16:51:39.824414 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-oauth-serving-cert\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.824729 master-0 kubenswrapper[32968]: I0309 16:51:39.824548 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-config\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.824729 master-0 kubenswrapper[32968]: I0309 16:51:39.824597 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c2cw\" (UniqueName: \"kubernetes.io/projected/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-kube-api-access-9c2cw\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.824729 master-0 kubenswrapper[32968]: I0309 16:51:39.824640 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-service-ca\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.927122 master-0 kubenswrapper[32968]: I0309 16:51:39.926989 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-service-ca\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.927122 master-0 kubenswrapper[32968]: I0309 16:51:39.927096 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-oauth-config\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " 
pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.927122 master-0 kubenswrapper[32968]: I0309 16:51:39.927123 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-trusted-ca-bundle\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.927817 master-0 kubenswrapper[32968]: I0309 16:51:39.927719 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-serving-cert\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.928499 master-0 kubenswrapper[32968]: I0309 16:51:39.927955 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-oauth-serving-cert\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.928499 master-0 kubenswrapper[32968]: I0309 16:51:39.928077 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-config\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.928499 master-0 kubenswrapper[32968]: I0309 16:51:39.928136 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-service-ca\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.928499 master-0 kubenswrapper[32968]: I0309 16:51:39.928158 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c2cw\" (UniqueName: \"kubernetes.io/projected/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-kube-api-access-9c2cw\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.928499 master-0 kubenswrapper[32968]: I0309 16:51:39.928447 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-trusted-ca-bundle\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.928992 master-0 kubenswrapper[32968]: I0309 16:51:39.928692 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-oauth-serving-cert\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.929040 master-0 kubenswrapper[32968]: I0309 16:51:39.928986 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-config\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.932122 master-0 kubenswrapper[32968]: I0309 16:51:39.932086 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-oauth-config\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.932354 master-0 kubenswrapper[32968]: I0309 16:51:39.932135 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-serving-cert\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:39.947712 master-0 kubenswrapper[32968]: I0309 16:51:39.947653 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c2cw\" (UniqueName: \"kubernetes.io/projected/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-kube-api-access-9c2cw\") pod \"console-c86f77cc4-dksdp\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:40.055990 master-0 kubenswrapper[32968]: I0309 16:51:40.055745 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:40.467397 master-0 kubenswrapper[32968]: I0309 16:51:40.467341 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c86f77cc4-dksdp"]
Mar 09 16:51:40.474270 master-0 kubenswrapper[32968]: W0309 16:51:40.474204 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec7ce2b_51b7_4d03_ab76_a5f7220b8c1b.slice/crio-4961cea8312e72699a161f8866e24918d705feedc5c457f1ce7900ee521ccf26 WatchSource:0}: Error finding container 4961cea8312e72699a161f8866e24918d705feedc5c457f1ce7900ee521ccf26: Status 404 returned error can't find the container with id 4961cea8312e72699a161f8866e24918d705feedc5c457f1ce7900ee521ccf26
Mar 09 16:51:41.375148 master-0 kubenswrapper[32968]: I0309 16:51:41.374960 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c86f77cc4-dksdp" event={"ID":"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b","Type":"ContainerStarted","Data":"9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe"}
Mar 09 16:51:41.375148 master-0 kubenswrapper[32968]: I0309 16:51:41.375026 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c86f77cc4-dksdp" event={"ID":"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b","Type":"ContainerStarted","Data":"4961cea8312e72699a161f8866e24918d705feedc5c457f1ce7900ee521ccf26"}
Mar 09 16:51:41.402338 master-0 kubenswrapper[32968]: I0309 16:51:41.402220 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c86f77cc4-dksdp" podStartSLOduration=2.402199557 podStartE2EDuration="2.402199557s" podCreationTimestamp="2026-03-09 16:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:51:41.397143791 +0000 UTC m=+327.500466351" watchObservedRunningTime="2026-03-09 16:51:41.402199557 +0000 UTC m=+327.505522097"
Mar 09 16:51:50.056014 master-0 kubenswrapper[32968]: I0309 16:51:50.055935 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:50.056014 master-0 kubenswrapper[32968]: I0309 16:51:50.056013 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:50.063586 master-0 kubenswrapper[32968]: I0309 16:51:50.063542 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:50.447937 master-0 kubenswrapper[32968]: I0309 16:51:50.447874 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c86f77cc4-dksdp"
Mar 09 16:51:50.528393 master-0 kubenswrapper[32968]: I0309 16:51:50.528338 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bd7656797-fzjhw"]
Mar 09 16:52:03.265510 master-0 kubenswrapper[32968]: I0309 16:52:03.265407 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"]
Mar 09 16:52:03.266651 master-0 kubenswrapper[32968]: I0309 16:52:03.266610 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.270771 master-0 kubenswrapper[32968]: I0309 16:52:03.269231 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-7kft5"
Mar 09 16:52:03.270771 master-0 kubenswrapper[32968]: I0309 16:52:03.269935 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 09 16:52:03.274638 master-0 kubenswrapper[32968]: I0309 16:52:03.274571 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"]
Mar 09 16:52:03.323040 master-0 kubenswrapper[32968]: I0309 16:52:03.322880 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.323503 master-0 kubenswrapper[32968]: I0309 16:52:03.323394 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.323644 master-0 kubenswrapper[32968]: I0309 16:52:03.323626 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-var-lock\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.425169 master-0 kubenswrapper[32968]: I0309 16:52:03.425091 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.425392 master-0 kubenswrapper[32968]: I0309 16:52:03.425247 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.425392 master-0 kubenswrapper[32968]: I0309 16:52:03.425315 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.425392 master-0 kubenswrapper[32968]: I0309 16:52:03.425349 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-var-lock\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.425555 master-0 kubenswrapper[32968]: I0309 16:52:03.425477 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-var-lock\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.441034 master-0 kubenswrapper[32968]: I0309 16:52:03.440986 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:03.584955 master-0 kubenswrapper[32968]: I0309 16:52:03.584815 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:04.003685 master-0 kubenswrapper[32968]: I0309 16:52:04.003638 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"]
Mar 09 16:52:04.007759 master-0 kubenswrapper[32968]: W0309 16:52:04.007663 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2f32f448_cfdc_4de7_906e_ca2e7bce6c1c.slice/crio-ee20bdc5388f90c443885389ea85ed801785cb6ee4093d9066aa6e57b2167bcd WatchSource:0}: Error finding container ee20bdc5388f90c443885389ea85ed801785cb6ee4093d9066aa6e57b2167bcd: Status 404 returned error can't find the container with id ee20bdc5388f90c443885389ea85ed801785cb6ee4093d9066aa6e57b2167bcd
Mar 09 16:52:04.572804 master-0 kubenswrapper[32968]: I0309 16:52:04.572705 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c","Type":"ContainerStarted","Data":"3a2db81abfd908cab919c9e6212fbb15f49f0fe26219c0b3f43add9a1be6e0e8"}
Mar 09 16:52:04.572804 master-0 kubenswrapper[32968]: I0309 16:52:04.572772 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c","Type":"ContainerStarted","Data":"ee20bdc5388f90c443885389ea85ed801785cb6ee4093d9066aa6e57b2167bcd"}
Mar 09 16:52:04.598982 master-0 kubenswrapper[32968]: I0309 16:52:04.598835 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=1.598808249 podStartE2EDuration="1.598808249s" podCreationTimestamp="2026-03-09 16:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:52:04.594708989 +0000 UTC m=+350.698031539" watchObservedRunningTime="2026-03-09 16:52:04.598808249 +0000 UTC m=+350.702130789"
Mar 09 16:52:15.565782 master-0 kubenswrapper[32968]: I0309 16:52:15.565668 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bd7656797-fzjhw" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console" containerID="cri-o://6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97" gracePeriod=15
Mar 09 16:52:15.978415 master-0 kubenswrapper[32968]: I0309 16:52:15.978366 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bd7656797-fzjhw_4f93fd52-1872-4223-962b-c608b2737866/console/0.log"
Mar 09 16:52:15.978836 master-0 kubenswrapper[32968]: I0309 16:52:15.978455 32968 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:52:16.025965 master-0 kubenswrapper[32968]: I0309 16:52:16.025877 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-oauth-config\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.025965 master-0 kubenswrapper[32968]: I0309 16:52:16.025988 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rc9k\" (UniqueName: \"kubernetes.io/projected/4f93fd52-1872-4223-962b-c608b2737866-kube-api-access-7rc9k\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.026476 master-0 kubenswrapper[32968]: I0309 16:52:16.026038 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-oauth-serving-cert\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.026476 master-0 kubenswrapper[32968]: I0309 16:52:16.026163 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-service-ca\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.026476 master-0 kubenswrapper[32968]: I0309 16:52:16.026197 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-console-config\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.026476 master-0 kubenswrapper[32968]: I0309 16:52:16.026219 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-trusted-ca-bundle\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.026476 master-0 kubenswrapper[32968]: I0309 16:52:16.026255 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-serving-cert\") pod \"4f93fd52-1872-4223-962b-c608b2737866\" (UID: \"4f93fd52-1872-4223-962b-c608b2737866\") " 
Mar 09 16:52:16.026902 master-0 kubenswrapper[32968]: I0309 16:52:16.026827 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-console-config" (OuterVolumeSpecName: "console-config") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:52:16.026902 master-0 kubenswrapper[32968]: I0309 16:52:16.026886 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:52:16.027012 master-0 kubenswrapper[32968]: I0309 16:52:16.026843 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-service-ca" (OuterVolumeSpecName: "service-ca") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:52:16.027012 master-0 kubenswrapper[32968]: I0309 16:52:16.026918 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 09 16:52:16.029550 master-0 kubenswrapper[32968]: I0309 16:52:16.029483 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:52:16.031014 master-0 kubenswrapper[32968]: I0309 16:52:16.030956 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f93fd52-1872-4223-962b-c608b2737866-kube-api-access-7rc9k" (OuterVolumeSpecName: "kube-api-access-7rc9k") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "kube-api-access-7rc9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:52:16.031230 master-0 kubenswrapper[32968]: I0309 16:52:16.031162 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4f93fd52-1872-4223-962b-c608b2737866" (UID: "4f93fd52-1872-4223-962b-c608b2737866"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128342 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rc9k\" (UniqueName: \"kubernetes.io/projected/4f93fd52-1872-4223-962b-c608b2737866-kube-api-access-7rc9k\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128394 32968 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128406 32968 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128434 32968 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-console-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128445 32968 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f93fd52-1872-4223-962b-c608b2737866-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128453 32968 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.128575 master-0 kubenswrapper[32968]: I0309 16:52:16.128461 32968 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4f93fd52-1872-4223-962b-c608b2737866-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:16.664734 master-0 kubenswrapper[32968]: I0309 16:52:16.664633 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bd7656797-fzjhw_4f93fd52-1872-4223-962b-c608b2737866/console/0.log"
Mar 09 16:52:16.664734 master-0 kubenswrapper[32968]: I0309 16:52:16.664698 32968 generic.go:334] "Generic (PLEG): container finished" podID="4f93fd52-1872-4223-962b-c608b2737866" containerID="6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97" exitCode=2
Mar 09 16:52:16.664734 master-0 kubenswrapper[32968]: I0309 16:52:16.664735 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd7656797-fzjhw" event={"ID":"4f93fd52-1872-4223-962b-c608b2737866","Type":"ContainerDied","Data":"6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97"}
Mar 09 16:52:16.664734 master-0 kubenswrapper[32968]: I0309 16:52:16.664763 32968 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-7bd7656797-fzjhw"
Mar 09 16:52:16.664734 master-0 kubenswrapper[32968]: I0309 16:52:16.664775 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd7656797-fzjhw" event={"ID":"4f93fd52-1872-4223-962b-c608b2737866","Type":"ContainerDied","Data":"c921962f3fbc046e5f45d71084ca5ce183fa630fc6552964a58710fbb9010e68"}
Mar 09 16:52:16.665921 master-0 kubenswrapper[32968]: I0309 16:52:16.664818 32968 scope.go:117] "RemoveContainer" containerID="6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97"
Mar 09 16:52:16.687164 master-0 kubenswrapper[32968]: I0309 16:52:16.687085 32968 scope.go:117] "RemoveContainer" containerID="6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97"
Mar 09 16:52:16.687523 master-0 kubenswrapper[32968]: E0309 16:52:16.687482 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97\": container with ID starting with 6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97 not found: ID does not exist" containerID="6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97"
Mar 09 16:52:16.687583 master-0 kubenswrapper[32968]: I0309 16:52:16.687522 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97"} err="failed to get container status \"6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97\": rpc error: code = NotFound desc = could not find container \"6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97\": container with ID starting with 6715f61bc899ff6cb837db73fcef90ee21122772461221dbfbeb24444c24ff97 not found: ID does not exist"
Mar 09 16:52:16.726883 master-0 kubenswrapper[32968]: I0309 16:52:16.726811 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bd7656797-fzjhw"]
Mar 09 16:52:16.742667 master-0 kubenswrapper[32968]: I0309 16:52:16.742587 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bd7656797-fzjhw"]
Mar 09 16:52:16.773528 master-0 kubenswrapper[32968]: I0309 16:52:16.773449 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-m5fg9"]
Mar 09 16:52:16.774123 master-0 kubenswrapper[32968]: E0309 16:52:16.773878 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console"
Mar 09 16:52:16.774123 master-0 kubenswrapper[32968]: I0309 16:52:16.774087 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console"
Mar 09 16:52:16.774524 master-0 kubenswrapper[32968]: I0309 16:52:16.774487 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f93fd52-1872-4223-962b-c608b2737866" containerName="console"
Mar 09 16:52:16.775218 master-0 kubenswrapper[32968]: I0309 16:52:16.775185 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.777905 master-0 kubenswrapper[32968]: I0309 16:52:16.777726 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 09 16:52:16.778495 master-0 kubenswrapper[32968]: I0309 16:52:16.778063 32968 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 09 16:52:16.778850 master-0 kubenswrapper[32968]: I0309 16:52:16.778807 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 09 16:52:16.779130 master-0 kubenswrapper[32968]: I0309 16:52:16.779079 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 09 16:52:16.802199 master-0 kubenswrapper[32968]: I0309 16:52:16.802098 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-m5fg9"] Mar 09 16:52:16.837936 master-0 kubenswrapper[32968]: I0309 16:52:16.837770 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bjq\" (UniqueName: \"kubernetes.io/projected/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-kube-api-access-v9bjq\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.838222 master-0 kubenswrapper[32968]: I0309 16:52:16.838099 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.838277 master-0 kubenswrapper[32968]: I0309 16:52:16.838246 32968 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-os-client-config\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.939913 master-0 kubenswrapper[32968]: I0309 16:52:16.939843 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.940347 master-0 kubenswrapper[32968]: I0309 16:52:16.939961 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-os-client-config\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.940347 master-0 kubenswrapper[32968]: I0309 16:52:16.940033 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bjq\" (UniqueName: \"kubernetes.io/projected/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-kube-api-access-v9bjq\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.941282 master-0 kubenswrapper[32968]: I0309 16:52:16.941220 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " 
pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.944336 master-0 kubenswrapper[32968]: I0309 16:52:16.944269 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-os-client-config\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:16.961957 master-0 kubenswrapper[32968]: I0309 16:52:16.961851 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bjq\" (UniqueName: \"kubernetes.io/projected/516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4-kube-api-access-v9bjq\") pod \"sushy-emulator-78f6d7d749-m5fg9\" (UID: \"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:17.115542 master-0 kubenswrapper[32968]: I0309 16:52:17.115400 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:17.536923 master-0 kubenswrapper[32968]: I0309 16:52:17.536861 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-m5fg9"] Mar 09 16:52:17.541584 master-0 kubenswrapper[32968]: W0309 16:52:17.541511 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod516280d4_f9f7_4d3c_aa9d_53bf0d8c50a4.slice/crio-c0e2db66a81a77fd0550bc231550dd61563af5d1e12396b804f51a270928e1c1 WatchSource:0}: Error finding container c0e2db66a81a77fd0550bc231550dd61563af5d1e12396b804f51a270928e1c1: Status 404 returned error can't find the container with id c0e2db66a81a77fd0550bc231550dd61563af5d1e12396b804f51a270928e1c1 Mar 09 16:52:17.543877 master-0 kubenswrapper[32968]: I0309 16:52:17.543827 32968 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 09 16:52:17.675442 master-0 kubenswrapper[32968]: I0309 16:52:17.675361 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" event={"ID":"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4","Type":"ContainerStarted","Data":"c0e2db66a81a77fd0550bc231550dd61563af5d1e12396b804f51a270928e1c1"} Mar 09 16:52:18.094200 master-0 kubenswrapper[32968]: I0309 16:52:18.094131 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f93fd52-1872-4223-962b-c608b2737866" path="/var/lib/kubelet/pods/4f93fd52-1872-4223-962b-c608b2737866/volumes" Mar 09 16:52:24.723937 master-0 kubenswrapper[32968]: I0309 16:52:24.723877 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" event={"ID":"516280d4-f9f7-4d3c-aa9d-53bf0d8c50a4","Type":"ContainerStarted","Data":"96872993b09f16e142ed30f7f725b1689c98e7b93686c08a13677e17248c585c"} Mar 09 16:52:24.749138 master-0 kubenswrapper[32968]: I0309 
16:52:24.749018 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" podStartSLOduration=2.17403712 podStartE2EDuration="8.748994024s" podCreationTimestamp="2026-03-09 16:52:16 +0000 UTC" firstStartedPulling="2026-03-09 16:52:17.54369259 +0000 UTC m=+363.647015130" lastFinishedPulling="2026-03-09 16:52:24.118649494 +0000 UTC m=+370.221972034" observedRunningTime="2026-03-09 16:52:24.743039555 +0000 UTC m=+370.846362095" watchObservedRunningTime="2026-03-09 16:52:24.748994024 +0000 UTC m=+370.852316574" Mar 09 16:52:27.116114 master-0 kubenswrapper[32968]: I0309 16:52:27.116039 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:27.116114 master-0 kubenswrapper[32968]: I0309 16:52:27.116113 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:27.126626 master-0 kubenswrapper[32968]: I0309 16:52:27.126575 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:27.747666 master-0 kubenswrapper[32968]: I0309 16:52:27.747586 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-m5fg9" Mar 09 16:52:36.823687 master-0 kubenswrapper[32968]: I0309 16:52:36.823491 32968 generic.go:334] "Generic (PLEG): container finished" podID="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" containerID="7c7e82a8000eb584fb2d9fc14766cd7c65340bfb72b0d9d1871812e5a7249542" exitCode=0 Mar 09 16:52:36.823687 master-0 kubenswrapper[32968]: I0309 16:52:36.823610 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" 
event={"ID":"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b","Type":"ContainerDied","Data":"7c7e82a8000eb584fb2d9fc14766cd7c65340bfb72b0d9d1871812e5a7249542"} Mar 09 16:52:37.213953 master-0 kubenswrapper[32968]: I0309 16:52:37.213890 32968 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 09 16:52:37.214799 master-0 kubenswrapper[32968]: I0309 16:52:37.214239 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="cluster-policy-controller" containerID="cri-o://4cd8903e8e22ba82f42ce990c7d672d208e9b2502ddb3553b9e1798f91e13ece" gracePeriod=30 Mar 09 16:52:37.214799 master-0 kubenswrapper[32968]: I0309 16:52:37.214410 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://3c38b2115cd52d1efef54c2999128dc674a18b9803bfdcdba9d9e455d6aa049a" gracePeriod=30 Mar 09 16:52:37.214799 master-0 kubenswrapper[32968]: I0309 16:52:37.214461 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://55b9dd03a97a7153346e305d1d756d1e7bf45a58d0547c62d3e8a40594f9dbaa" gracePeriod=30 Mar 09 16:52:37.214799 master-0 kubenswrapper[32968]: I0309 16:52:37.214686 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" 
containerID="cri-o://911cba7e1f9cb852c637561f891e3b5a982532d757d88a06ff9aebcbd7c475c2" gracePeriod=30 Mar 09 16:52:37.216053 master-0 kubenswrapper[32968]: I0309 16:52:37.215694 32968 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: E0309 16:52:37.216118 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: I0309 16:52:37.216134 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: E0309 16:52:37.216145 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-recovery-controller" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: I0309 16:52:37.216152 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-recovery-controller" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: E0309 16:52:37.216170 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-cert-syncer" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: I0309 16:52:37.216177 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-cert-syncer" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: E0309 16:52:37.216195 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="cluster-policy-controller" Mar 09 16:52:37.216239 master-0 kubenswrapper[32968]: I0309 16:52:37.216201 32968 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="cluster-policy-controller" Mar 09 16:52:37.216502 master-0 kubenswrapper[32968]: I0309 16:52:37.216441 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" Mar 09 16:52:37.216592 master-0 kubenswrapper[32968]: I0309 16:52:37.216559 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-cert-syncer" Mar 09 16:52:37.216592 master-0 kubenswrapper[32968]: I0309 16:52:37.216582 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="cluster-policy-controller" Mar 09 16:52:37.216679 master-0 kubenswrapper[32968]: I0309 16:52:37.216599 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager-recovery-controller" Mar 09 16:52:37.216775 master-0 kubenswrapper[32968]: E0309 16:52:37.216755 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" Mar 09 16:52:37.216775 master-0 kubenswrapper[32968]: I0309 16:52:37.216769 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" Mar 09 16:52:37.216903 master-0 kubenswrapper[32968]: I0309 16:52:37.216880 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" containerName="kube-controller-manager" Mar 09 16:52:37.308906 master-0 kubenswrapper[32968]: I0309 16:52:37.308751 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/77dc95537478dce4d04a84d6f7508175-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"77dc95537478dce4d04a84d6f7508175\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.308906 master-0 kubenswrapper[32968]: I0309 16:52:37.308872 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/77dc95537478dce4d04a84d6f7508175-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"77dc95537478dce4d04a84d6f7508175\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.410678 master-0 kubenswrapper[32968]: I0309 16:52:37.410613 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/77dc95537478dce4d04a84d6f7508175-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"77dc95537478dce4d04a84d6f7508175\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.411161 master-0 kubenswrapper[32968]: I0309 16:52:37.410784 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/77dc95537478dce4d04a84d6f7508175-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"77dc95537478dce4d04a84d6f7508175\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.411161 master-0 kubenswrapper[32968]: I0309 16:52:37.411131 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/77dc95537478dce4d04a84d6f7508175-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"77dc95537478dce4d04a84d6f7508175\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.411566 master-0 kubenswrapper[32968]: I0309 16:52:37.411479 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/77dc95537478dce4d04a84d6f7508175-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"77dc95537478dce4d04a84d6f7508175\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.424254 master-0 kubenswrapper[32968]: I0309 16:52:37.424180 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" Mar 09 16:52:37.433329 master-0 kubenswrapper[32968]: I0309 16:52:37.433274 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager-cert-syncer/0.log" Mar 09 16:52:37.434417 master-0 kubenswrapper[32968]: I0309 16:52:37.434380 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager/0.log" Mar 09 16:52:37.434523 master-0 kubenswrapper[32968]: I0309 16:52:37.434488 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 09 16:52:37.484350 master-0 kubenswrapper[32968]: I0309 16:52:37.484275 32968 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="4ee901e15ed65fb7aa5785ec8ec0563e" podUID="77dc95537478dce4d04a84d6f7508175" Mar 09 16:52:37.512262 master-0 kubenswrapper[32968]: I0309 16:52:37.512219 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.512402 master-0 kubenswrapper[32968]: I0309 16:52:37.512275 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.512402 master-0 kubenswrapper[32968]: I0309 16:52:37.512321 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.512402 master-0 kubenswrapper[32968]: I0309 16:52:37.512391 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.513076 master-0 
kubenswrapper[32968]: I0309 16:52:37.513004 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log" (OuterVolumeSpecName: "audit-log") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:52:37.513255 master-0 kubenswrapper[32968]: I0309 16:52:37.513191 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:52:37.513802 master-0 kubenswrapper[32968]: I0309 16:52:37.513770 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.513916 master-0 kubenswrapper[32968]: I0309 16:52:37.513870 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.513984 master-0 kubenswrapper[32968]: I0309 16:52:37.513961 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") pod 
\"4ee901e15ed65fb7aa5785ec8ec0563e\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " Mar 09 16:52:37.514543 master-0 kubenswrapper[32968]: I0309 16:52:37.514491 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") pod \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\" (UID: \"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b\") " Mar 09 16:52:37.514543 master-0 kubenswrapper[32968]: I0309 16:52:37.514516 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:52:37.514689 master-0 kubenswrapper[32968]: I0309 16:52:37.514528 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") pod \"4ee901e15ed65fb7aa5785ec8ec0563e\" (UID: \"4ee901e15ed65fb7aa5785ec8ec0563e\") " Mar 09 16:52:37.514689 master-0 kubenswrapper[32968]: I0309 16:52:37.514581 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4ee901e15ed65fb7aa5785ec8ec0563e" (UID: "4ee901e15ed65fb7aa5785ec8ec0563e"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:52:37.514689 master-0 kubenswrapper[32968]: I0309 16:52:37.514592 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4ee901e15ed65fb7aa5785ec8ec0563e" (UID: "4ee901e15ed65fb7aa5785ec8ec0563e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 09 16:52:37.516101 master-0 kubenswrapper[32968]: I0309 16:52:37.516068 32968 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.516197 master-0 kubenswrapper[32968]: I0309 16:52:37.516105 32968 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.516197 master-0 kubenswrapper[32968]: I0309 16:52:37.516127 32968 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.516197 master-0 kubenswrapper[32968]: I0309 16:52:37.516142 32968 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.516197 master-0 kubenswrapper[32968]: I0309 16:52:37.516156 32968 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4ee901e15ed65fb7aa5785ec8ec0563e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.516937 master-0 
kubenswrapper[32968]: I0309 16:52:37.516879 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:52:37.517793 master-0 kubenswrapper[32968]: I0309 16:52:37.517758 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:52:37.518016 master-0 kubenswrapper[32968]: I0309 16:52:37.517971 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w" (OuterVolumeSpecName: "kube-api-access-h8p7w") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "kube-api-access-h8p7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:52:37.518411 master-0 kubenswrapper[32968]: I0309 16:52:37.518356 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" (UID: "ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:52:37.617288 master-0 kubenswrapper[32968]: I0309 16:52:37.617161 32968 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.617288 master-0 kubenswrapper[32968]: I0309 16:52:37.617249 32968 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.617288 master-0 kubenswrapper[32968]: I0309 16:52:37.617269 32968 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.617288 master-0 kubenswrapper[32968]: I0309 16:52:37.617281 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8p7w\" (UniqueName: \"kubernetes.io/projected/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b-kube-api-access-h8p7w\") on node \"master-0\" DevicePath \"\"" Mar 09 16:52:37.832722 master-0 kubenswrapper[32968]: I0309 16:52:37.832652 32968 generic.go:334] "Generic (PLEG): container finished" podID="2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" containerID="3a2db81abfd908cab919c9e6212fbb15f49f0fe26219c0b3f43add9a1be6e0e8" exitCode=0 Mar 09 16:52:37.833521 master-0 kubenswrapper[32968]: I0309 16:52:37.832742 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c","Type":"ContainerDied","Data":"3a2db81abfd908cab919c9e6212fbb15f49f0fe26219c0b3f43add9a1be6e0e8"} Mar 09 16:52:37.836741 master-0 kubenswrapper[32968]: I0309 16:52:37.836661 32968 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager-cert-syncer/0.log"
Mar 09 16:52:37.838834 master-0 kubenswrapper[32968]: I0309 16:52:37.838730 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager/0.log"
Mar 09 16:52:37.838834 master-0 kubenswrapper[32968]: I0309 16:52:37.838804 32968 generic.go:334] "Generic (PLEG): container finished" podID="4ee901e15ed65fb7aa5785ec8ec0563e" containerID="911cba7e1f9cb852c637561f891e3b5a982532d757d88a06ff9aebcbd7c475c2" exitCode=0
Mar 09 16:52:37.838834 master-0 kubenswrapper[32968]: I0309 16:52:37.838829 32968 generic.go:334] "Generic (PLEG): container finished" podID="4ee901e15ed65fb7aa5785ec8ec0563e" containerID="55b9dd03a97a7153346e305d1d756d1e7bf45a58d0547c62d3e8a40594f9dbaa" exitCode=0
Mar 09 16:52:37.838834 master-0 kubenswrapper[32968]: I0309 16:52:37.838837 32968 generic.go:334] "Generic (PLEG): container finished" podID="4ee901e15ed65fb7aa5785ec8ec0563e" containerID="3c38b2115cd52d1efef54c2999128dc674a18b9803bfdcdba9d9e455d6aa049a" exitCode=2
Mar 09 16:52:37.839117 master-0 kubenswrapper[32968]: I0309 16:52:37.838844 32968 generic.go:334] "Generic (PLEG): container finished" podID="4ee901e15ed65fb7aa5785ec8ec0563e" containerID="4cd8903e8e22ba82f42ce990c7d672d208e9b2502ddb3553b9e1798f91e13ece" exitCode=0
Mar 09 16:52:37.839117 master-0 kubenswrapper[32968]: I0309 16:52:37.839013 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:52:37.839117 master-0 kubenswrapper[32968]: I0309 16:52:37.839035 32968 scope.go:117] "RemoveContainer" containerID="f6a905eaba301188ad44a65faa2e809a7197fca881d55b61c8a9cfed3f77dd08"
Mar 09 16:52:37.839317 master-0 kubenswrapper[32968]: I0309 16:52:37.839018 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1309ebab744cfcb402c01aeb84ea34b94907e4c791e16243098f518b5f0360b7"
Mar 09 16:52:37.842021 master-0 kubenswrapper[32968]: I0309 16:52:37.841916 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c4558858-9rclt" event={"ID":"ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b","Type":"ContainerDied","Data":"7cbb60752ad730773fcc5f1a03bf60c07289d9baad3097dc05211337bc73fb20"}
Mar 09 16:52:37.842021 master-0 kubenswrapper[32968]: I0309 16:52:37.841988 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7c4558858-9rclt"
Mar 09 16:52:37.916775 master-0 kubenswrapper[32968]: I0309 16:52:37.916713 32968 scope.go:117] "RemoveContainer" containerID="7c7e82a8000eb584fb2d9fc14766cd7c65340bfb72b0d9d1871812e5a7249542"
Mar 09 16:52:37.956067 master-0 kubenswrapper[32968]: I0309 16:52:37.955945 32968 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="4ee901e15ed65fb7aa5785ec8ec0563e" podUID="77dc95537478dce4d04a84d6f7508175"
Mar 09 16:52:38.005817 master-0 kubenswrapper[32968]: I0309 16:52:38.005721 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7c4558858-9rclt"]
Mar 09 16:52:38.045500 master-0 kubenswrapper[32968]: I0309 16:52:38.045402 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-7c4558858-9rclt"]
Mar 09 16:52:38.094335 master-0 kubenswrapper[32968]: I0309 16:52:38.094193 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee901e15ed65fb7aa5785ec8ec0563e" path="/var/lib/kubelet/pods/4ee901e15ed65fb7aa5785ec8ec0563e/volumes"
Mar 09 16:52:38.095103 master-0 kubenswrapper[32968]: I0309 16:52:38.095067 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" path="/var/lib/kubelet/pods/ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b/volumes"
Mar 09 16:52:38.854770 master-0 kubenswrapper[32968]: I0309 16:52:38.854704 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_4ee901e15ed65fb7aa5785ec8ec0563e/kube-controller-manager-cert-syncer/0.log"
Mar 09 16:52:39.154784 master-0 kubenswrapper[32968]: I0309 16:52:39.154751 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:39.244388 master-0 kubenswrapper[32968]: I0309 16:52:39.244338 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kubelet-dir\") pod \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") "
Mar 09 16:52:39.244799 master-0 kubenswrapper[32968]: I0309 16:52:39.244506 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" (UID: "2f32f448-cfdc-4de7-906e-ca2e7bce6c1c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:52:39.244921 master-0 kubenswrapper[32968]: I0309 16:52:39.244903 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kube-api-access\") pod \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") "
Mar 09 16:52:39.245036 master-0 kubenswrapper[32968]: I0309 16:52:39.245020 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-var-lock\") pod \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\" (UID: \"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c\") "
Mar 09 16:52:39.245151 master-0 kubenswrapper[32968]: I0309 16:52:39.245094 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" (UID: "2f32f448-cfdc-4de7-906e-ca2e7bce6c1c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 09 16:52:39.246165 master-0 kubenswrapper[32968]: I0309 16:52:39.246069 32968 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:39.246165 master-0 kubenswrapper[32968]: I0309 16:52:39.246160 32968 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:39.248488 master-0 kubenswrapper[32968]: I0309 16:52:39.248375 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" (UID: "2f32f448-cfdc-4de7-906e-ca2e7bce6c1c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:52:39.348204 master-0 kubenswrapper[32968]: I0309 16:52:39.348108 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2f32f448-cfdc-4de7-906e-ca2e7bce6c1c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 09 16:52:39.865449 master-0 kubenswrapper[32968]: I0309 16:52:39.865365 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2f32f448-cfdc-4de7-906e-ca2e7bce6c1c","Type":"ContainerDied","Data":"ee20bdc5388f90c443885389ea85ed801785cb6ee4093d9066aa6e57b2167bcd"}
Mar 09 16:52:39.866129 master-0 kubenswrapper[32968]: I0309 16:52:39.865562 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 09 16:52:39.866214 master-0 kubenswrapper[32968]: I0309 16:52:39.866118 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee20bdc5388f90c443885389ea85ed801785cb6ee4093d9066aa6e57b2167bcd"
Mar 09 16:52:50.084116 master-0 kubenswrapper[32968]: I0309 16:52:50.084048 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:52:50.102089 master-0 kubenswrapper[32968]: I0309 16:52:50.102018 32968 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2848d3f9-3ec2-40ab-ba19-2265b0b93df7"
Mar 09 16:52:50.102089 master-0 kubenswrapper[32968]: I0309 16:52:50.102066 32968 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2848d3f9-3ec2-40ab-ba19-2265b0b93df7"
Mar 09 16:52:50.121982 master-0 kubenswrapper[32968]: I0309 16:52:50.121932 32968 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:52:50.122618 master-0 kubenswrapper[32968]: I0309 16:52:50.122588 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:52:50.130565 master-0 kubenswrapper[32968]: I0309 16:52:50.130504 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:52:50.142192 master-0 kubenswrapper[32968]: I0309 16:52:50.142132 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:52:50.153442 master-0 kubenswrapper[32968]: I0309 16:52:50.153372 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 09 16:52:50.956933 master-0 kubenswrapper[32968]: I0309 16:52:50.956850 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"77dc95537478dce4d04a84d6f7508175","Type":"ContainerStarted","Data":"165a88d0e1a30e0e6808e3e8f13cb0afc62773ee65859c40c6fee30fdabb9297"}
Mar 09 16:52:50.956933 master-0 kubenswrapper[32968]: I0309 16:52:50.956922 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"77dc95537478dce4d04a84d6f7508175","Type":"ContainerStarted","Data":"8c4cde5f17fb48ed7d0cec7cc4437629de1119a022b73d2e298046fc75c22653"}
Mar 09 16:52:50.956933 master-0 kubenswrapper[32968]: I0309 16:52:50.956935 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"77dc95537478dce4d04a84d6f7508175","Type":"ContainerStarted","Data":"4ca9896d5427e139d1af45d5a312d6ec5d6a0555456893a418ef78810aa46d6b"}
Mar 09 16:52:50.956933 master-0 kubenswrapper[32968]: I0309 16:52:50.956944 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"77dc95537478dce4d04a84d6f7508175","Type":"ContainerStarted","Data":"ddcaa8d6058b8663ea0840693664df0e2ed0e89d87c61b283f03d5aae5eaf16c"}
Mar 09 16:52:51.967810 master-0 kubenswrapper[32968]: I0309 16:52:51.967738 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"77dc95537478dce4d04a84d6f7508175","Type":"ContainerStarted","Data":"422b9f10884b7a72807000f21736a7a67a0b7ca9324c006a787d392c8a439353"}
Mar 09 16:53:00.142807 master-0 kubenswrapper[32968]: I0309 16:53:00.142666 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:00.142807 master-0 kubenswrapper[32968]: I0309 16:53:00.142736 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:00.142807 master-0 kubenswrapper[32968]: I0309 16:53:00.142752 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:00.142807 master-0 kubenswrapper[32968]: I0309 16:53:00.142764 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:00.143581 master-0 kubenswrapper[32968]: I0309 16:53:00.143183 32968 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 09 16:53:00.143581 master-0 kubenswrapper[32968]: I0309 16:53:00.143324 32968 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="77dc95537478dce4d04a84d6f7508175" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 09 16:53:00.147208 master-0 kubenswrapper[32968]: I0309 16:53:00.147092 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:00.175804 master-0 kubenswrapper[32968]: I0309 16:53:00.175682 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=10.175645642 podStartE2EDuration="10.175645642s" podCreationTimestamp="2026-03-09 16:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:52:52.002284763 +0000 UTC m=+398.105607323" watchObservedRunningTime="2026-03-09 16:53:00.175645642 +0000 UTC m=+406.278968182"
Mar 09 16:53:01.043881 master-0 kubenswrapper[32968]: I0309 16:53:01.043672 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:10.146351 master-0 kubenswrapper[32968]: I0309 16:53:10.146279 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:10.150175 master-0 kubenswrapper[32968]: I0309 16:53:10.150137 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 09 16:53:24.830346 master-0 kubenswrapper[32968]: I0309 16:53:24.830265 32968 scope.go:117] "RemoveContainer" containerID="55b9dd03a97a7153346e305d1d756d1e7bf45a58d0547c62d3e8a40594f9dbaa"
Mar 09 16:53:24.852941 master-0 kubenswrapper[32968]: I0309 16:53:24.852882 32968 scope.go:117] "RemoveContainer" containerID="4cd8903e8e22ba82f42ce990c7d672d208e9b2502ddb3553b9e1798f91e13ece"
Mar 09 16:53:24.874736 master-0 kubenswrapper[32968]: I0309 16:53:24.874671 32968 scope.go:117] "RemoveContainer" containerID="3c38b2115cd52d1efef54c2999128dc674a18b9803bfdcdba9d9e455d6aa049a"
Mar 09 16:53:27.140304 master-0 kubenswrapper[32968]: I0309 16:53:27.140191 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-74697d479c-4z799"]
Mar 09 16:53:27.140963 master-0 kubenswrapper[32968]: E0309 16:53:27.140721 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" containerName="metrics-server"
Mar 09 16:53:27.140963 master-0 kubenswrapper[32968]: I0309 16:53:27.140746 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" containerName="metrics-server"
Mar 09 16:53:27.140963 master-0 kubenswrapper[32968]: E0309 16:53:27.140803 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" containerName="installer"
Mar 09 16:53:27.140963 master-0 kubenswrapper[32968]: I0309 16:53:27.140812 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" containerName="installer"
Mar 09 16:53:27.141141 master-0 kubenswrapper[32968]: I0309 16:53:27.141022 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f32f448-cfdc-4de7-906e-ca2e7bce6c1c" containerName="installer"
Mar 09 16:53:27.141141 master-0 kubenswrapper[32968]: I0309 16:53:27.141059 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf3a18d-eccb-4c92-bc2f-f3b85d2c219b" containerName="metrics-server"
Mar 09 16:53:27.141920 master-0 kubenswrapper[32968]: I0309 16:53:27.141865 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.160591 master-0 kubenswrapper[32968]: I0309 16:53:27.160532 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-74697d479c-4z799"]
Mar 09 16:53:27.322318 master-0 kubenswrapper[32968]: I0309 16:53:27.322205 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tkz\" (UniqueName: \"kubernetes.io/projected/a086cb7e-b45f-40db-896c-620aaf9d805c-kube-api-access-z6tkz\") pod \"nova-console-poller-74697d479c-4z799\" (UID: \"a086cb7e-b45f-40db-896c-620aaf9d805c\") " pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.322858 master-0 kubenswrapper[32968]: I0309 16:53:27.322776 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a086cb7e-b45f-40db-896c-620aaf9d805c-os-client-config\") pod \"nova-console-poller-74697d479c-4z799\" (UID: \"a086cb7e-b45f-40db-896c-620aaf9d805c\") " pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.425180 master-0 kubenswrapper[32968]: I0309 16:53:27.425003 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a086cb7e-b45f-40db-896c-620aaf9d805c-os-client-config\") pod \"nova-console-poller-74697d479c-4z799\" (UID: \"a086cb7e-b45f-40db-896c-620aaf9d805c\") " pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.425524 master-0 kubenswrapper[32968]: I0309 16:53:27.425239 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tkz\" (UniqueName: \"kubernetes.io/projected/a086cb7e-b45f-40db-896c-620aaf9d805c-kube-api-access-z6tkz\") pod \"nova-console-poller-74697d479c-4z799\" (UID: \"a086cb7e-b45f-40db-896c-620aaf9d805c\") " pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.430297 master-0 kubenswrapper[32968]: I0309 16:53:27.430243 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/a086cb7e-b45f-40db-896c-620aaf9d805c-os-client-config\") pod \"nova-console-poller-74697d479c-4z799\" (UID: \"a086cb7e-b45f-40db-896c-620aaf9d805c\") " pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.445516 master-0 kubenswrapper[32968]: I0309 16:53:27.445413 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tkz\" (UniqueName: \"kubernetes.io/projected/a086cb7e-b45f-40db-896c-620aaf9d805c-kube-api-access-z6tkz\") pod \"nova-console-poller-74697d479c-4z799\" (UID: \"a086cb7e-b45f-40db-896c-620aaf9d805c\") " pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.460473 master-0 kubenswrapper[32968]: I0309 16:53:27.460374 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-74697d479c-4z799"
Mar 09 16:53:27.891742 master-0 kubenswrapper[32968]: I0309 16:53:27.891667 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-74697d479c-4z799"]
Mar 09 16:53:27.893374 master-0 kubenswrapper[32968]: W0309 16:53:27.893326 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda086cb7e_b45f_40db_896c_620aaf9d805c.slice/crio-c7fbff9940f720ebb5ea2da5d350e2b1f0a431a1e9405d6adbaa1b0e2ebf5fa7 WatchSource:0}: Error finding container c7fbff9940f720ebb5ea2da5d350e2b1f0a431a1e9405d6adbaa1b0e2ebf5fa7: Status 404 returned error can't find the container with id c7fbff9940f720ebb5ea2da5d350e2b1f0a431a1e9405d6adbaa1b0e2ebf5fa7
Mar 09 16:53:28.401406 master-0 kubenswrapper[32968]: I0309 16:53:28.401315 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-74697d479c-4z799" event={"ID":"a086cb7e-b45f-40db-896c-620aaf9d805c","Type":"ContainerStarted","Data":"c7fbff9940f720ebb5ea2da5d350e2b1f0a431a1e9405d6adbaa1b0e2ebf5fa7"}
Mar 09 16:53:34.456931 master-0 kubenswrapper[32968]: I0309 16:53:34.456856 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-74697d479c-4z799" event={"ID":"a086cb7e-b45f-40db-896c-620aaf9d805c","Type":"ContainerStarted","Data":"23ad2e7f42d0c28b655e8d07c79007f622d892739c7bbb604e4d9ce23aee123f"}
Mar 09 16:53:34.481499 master-0 kubenswrapper[32968]: I0309 16:53:34.481373 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-74697d479c-4z799" podStartSLOduration=1.8675543380000001 podStartE2EDuration="7.481341204s" podCreationTimestamp="2026-03-09 16:53:27 +0000 UTC" firstStartedPulling="2026-03-09 16:53:27.89612462 +0000 UTC m=+433.999447160" lastFinishedPulling="2026-03-09 16:53:33.509911486 +0000 UTC m=+439.613234026" observedRunningTime="2026-03-09 16:53:34.479159604 +0000 UTC m=+440.582482144" watchObservedRunningTime="2026-03-09 16:53:34.481341204 +0000 UTC m=+440.584663744"
Mar 09 16:53:40.964175 master-0 kubenswrapper[32968]: I0309 16:53:40.964065 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76b574c97f-rcllb"]
Mar 09 16:53:40.965582 master-0 kubenswrapper[32968]: I0309 16:53:40.965546 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.012942 master-0 kubenswrapper[32968]: I0309 16:53:41.012849 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76b574c97f-rcllb"]
Mar 09 16:53:41.083261 master-0 kubenswrapper[32968]: I0309 16:53:41.083167 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-oauth-serving-cert\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.083579 master-0 kubenswrapper[32968]: I0309 16:53:41.083278 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-config\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.083579 master-0 kubenswrapper[32968]: I0309 16:53:41.083311 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-oauth-config\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.083579 master-0 kubenswrapper[32968]: I0309 16:53:41.083341 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-serving-cert\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.083579 master-0 kubenswrapper[32968]: I0309 16:53:41.083360 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-service-ca\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.083579 master-0 kubenswrapper[32968]: I0309 16:53:41.083378 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6m6\" (UniqueName: \"kubernetes.io/projected/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-kube-api-access-sn6m6\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.083579 master-0 kubenswrapper[32968]: I0309 16:53:41.083411 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-trusted-ca-bundle\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.185911 master-0 kubenswrapper[32968]: I0309 16:53:41.185837 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-oauth-serving-cert\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.186232 master-0 kubenswrapper[32968]: I0309 16:53:41.185989 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-config\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.186639 master-0 kubenswrapper[32968]: I0309 16:53:41.186437 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-oauth-config\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.186639 master-0 kubenswrapper[32968]: I0309 16:53:41.186595 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-serving-cert\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.186851 master-0 kubenswrapper[32968]: I0309 16:53:41.186796 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-service-ca\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.187158 master-0 kubenswrapper[32968]: I0309 16:53:41.187123 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-oauth-serving-cert\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.187585 master-0 kubenswrapper[32968]: I0309 16:53:41.187384 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6m6\" (UniqueName: \"kubernetes.io/projected/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-kube-api-access-sn6m6\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.187585 master-0 kubenswrapper[32968]: I0309 16:53:41.187472 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-trusted-ca-bundle\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.188141 master-0 kubenswrapper[32968]: I0309 16:53:41.187872 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-config\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.188141 master-0 kubenswrapper[32968]: I0309 16:53:41.188114 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-service-ca\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.191558 master-0 kubenswrapper[32968]: I0309 16:53:41.189077 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-trusted-ca-bundle\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.191558 master-0 kubenswrapper[32968]: I0309 16:53:41.191439 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-serving-cert\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.200773 master-0 kubenswrapper[32968]: I0309 16:53:41.192126 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-oauth-config\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.232158 master-0 kubenswrapper[32968]: I0309 16:53:41.231955 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6m6\" (UniqueName: \"kubernetes.io/projected/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-kube-api-access-sn6m6\") pod \"console-76b574c97f-rcllb\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.284225 master-0 kubenswrapper[32968]: I0309 16:53:41.284161 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:41.695578 master-0 kubenswrapper[32968]: I0309 16:53:41.695487 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76b574c97f-rcllb"]
Mar 09 16:53:41.699867 master-0 kubenswrapper[32968]: W0309 16:53:41.699788 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb29b950c_6b0f_4d86_a05f_9af9af5ebb82.slice/crio-083da966e25747ea49b0e8c7f67f902224aae31bfb116321cce8c69ad2333750 WatchSource:0}: Error finding container 083da966e25747ea49b0e8c7f67f902224aae31bfb116321cce8c69ad2333750: Status 404 returned error can't find the container with id 083da966e25747ea49b0e8c7f67f902224aae31bfb116321cce8c69ad2333750
Mar 09 16:53:42.526938 master-0 kubenswrapper[32968]: I0309 16:53:42.526583 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b574c97f-rcllb" event={"ID":"b29b950c-6b0f-4d86-a05f-9af9af5ebb82","Type":"ContainerStarted","Data":"d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43"}
Mar 09 16:53:42.526938 master-0 kubenswrapper[32968]: I0309 16:53:42.526714 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b574c97f-rcllb" event={"ID":"b29b950c-6b0f-4d86-a05f-9af9af5ebb82","Type":"ContainerStarted","Data":"083da966e25747ea49b0e8c7f67f902224aae31bfb116321cce8c69ad2333750"}
Mar 09 16:53:42.554463 master-0 kubenswrapper[32968]: I0309 16:53:42.554317 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76b574c97f-rcllb" podStartSLOduration=2.554259949 podStartE2EDuration="2.554259949s" podCreationTimestamp="2026-03-09 16:53:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:53:42.548156501 +0000 UTC m=+448.651479041" watchObservedRunningTime="2026-03-09 16:53:42.554259949 +0000 UTC m=+448.657582489"
Mar 09 16:53:51.284863 master-0 kubenswrapper[32968]: I0309 16:53:51.284776 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:51.284863 master-0 kubenswrapper[32968]: I0309 16:53:51.284837 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:51.290345 master-0 kubenswrapper[32968]: I0309 16:53:51.290266 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:51.610096 master-0 kubenswrapper[32968]: I0309 16:53:51.609854 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76b574c97f-rcllb"
Mar 09 16:53:51.689464 master-0 kubenswrapper[32968]: I0309 16:53:51.688699 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c86f77cc4-dksdp"]
Mar 09 16:53:59.928322 master-0 kubenswrapper[32968]: I0309 16:53:59.928253 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"]
Mar 09 16:53:59.929505 master-0 kubenswrapper[32968]: I0309 16:53:59.929482 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:53:59.939972 master-0 kubenswrapper[32968]: I0309 16:53:59.939901 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"]
Mar 09 16:54:00.116820 master-0 kubenswrapper[32968]: I0309 16:54:00.116767 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqv4c\" (UniqueName: \"kubernetes.io/projected/ad1a44ce-e044-436b-867c-9c48bae42e45-kube-api-access-tqv4c\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.117100 master-0 kubenswrapper[32968]: I0309 16:54:00.117086 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ad1a44ce-e044-436b-867c-9c48bae42e45-os-client-config\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.117221 master-0 kubenswrapper[32968]: I0309 16:54:00.117200 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/ad1a44ce-e044-436b-867c-9c48bae42e45-nova-console-recordings-pv\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.219121 master-0 kubenswrapper[32968]: I0309 16:54:00.218991 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ad1a44ce-e044-436b-867c-9c48bae42e45-os-client-config\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.219542 master-0 kubenswrapper[32968]: I0309 16:54:00.219518 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqv4c\" (UniqueName: \"kubernetes.io/projected/ad1a44ce-e044-436b-867c-9c48bae42e45-kube-api-access-tqv4c\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.219698 master-0 kubenswrapper[32968]: I0309 16:54:00.219668 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/ad1a44ce-e044-436b-867c-9c48bae42e45-nova-console-recordings-pv\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.223516 master-0 kubenswrapper[32968]: I0309 16:54:00.223478 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ad1a44ce-e044-436b-867c-9c48bae42e45-os-client-config\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.236821 master-0 kubenswrapper[32968]: I0309 16:54:00.236784 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqv4c\" (UniqueName: \"kubernetes.io/projected/ad1a44ce-e044-436b-867c-9c48bae42e45-kube-api-access-tqv4c\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.843353 master-0 kubenswrapper[32968]: I0309 16:54:00.843307 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/ad1a44ce-e044-436b-867c-9c48bae42e45-nova-console-recordings-pv\") pod \"nova-console-recorder-5dcc544457-5dgdt\" (UID: \"ad1a44ce-e044-436b-867c-9c48bae42e45\") " pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:00.851020 master-0 kubenswrapper[32968]: I0309 16:54:00.850963 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"
Mar 09 16:54:01.279166 master-0 kubenswrapper[32968]: I0309 16:54:01.279054 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5dcc544457-5dgdt"]
Mar 09 16:54:01.280280 master-0 kubenswrapper[32968]: W0309 16:54:01.280234 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad1a44ce_e044_436b_867c_9c48bae42e45.slice/crio-d68d01afdd2871ca402bae0c0e38f7e9ae32437b94355a08fbb5135ecb0cabfc WatchSource:0}: Error finding container d68d01afdd2871ca402bae0c0e38f7e9ae32437b94355a08fbb5135ecb0cabfc: Status 404 returned error can't find the container with id d68d01afdd2871ca402bae0c0e38f7e9ae32437b94355a08fbb5135ecb0cabfc
Mar 09 16:54:01.698310 master-0 kubenswrapper[32968]: I0309 16:54:01.698210 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt" event={"ID":"ad1a44ce-e044-436b-867c-9c48bae42e45","Type":"ContainerStarted","Data":"d68d01afdd2871ca402bae0c0e38f7e9ae32437b94355a08fbb5135ecb0cabfc"}
Mar 09 16:54:09.777600 master-0 kubenswrapper[32968]: I0309 16:54:09.777521 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt" event={"ID":"ad1a44ce-e044-436b-867c-9c48bae42e45","Type":"ContainerStarted","Data":"2d9bdb9c7fc767b8c92a3207d39a002c6757744d1036e42fc0a5170ce11d9564"}
Mar 09 16:54:09.802619 master-0 kubenswrapper[32968]: I0309
16:54:09.802492 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-5dcc544457-5dgdt" podStartSLOduration=3.130245747 podStartE2EDuration="10.802469473s" podCreationTimestamp="2026-03-09 16:53:59 +0000 UTC" firstStartedPulling="2026-03-09 16:54:01.283335769 +0000 UTC m=+467.386658309" lastFinishedPulling="2026-03-09 16:54:08.955559495 +0000 UTC m=+475.058882035" observedRunningTime="2026-03-09 16:54:09.800527639 +0000 UTC m=+475.903850199" watchObservedRunningTime="2026-03-09 16:54:09.802469473 +0000 UTC m=+475.905792013" Mar 09 16:54:16.740683 master-0 kubenswrapper[32968]: I0309 16:54:16.740590 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-c86f77cc4-dksdp" podUID="9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" containerName="console" containerID="cri-o://9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe" gracePeriod=15 Mar 09 16:54:17.142831 master-0 kubenswrapper[32968]: I0309 16:54:17.142747 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c86f77cc4-dksdp_9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b/console/0.log" Mar 09 16:54:17.142831 master-0 kubenswrapper[32968]: I0309 16:54:17.142839 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c86f77cc4-dksdp" Mar 09 16:54:17.239930 master-0 kubenswrapper[32968]: I0309 16:54:17.239832 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-config\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240448 master-0 kubenswrapper[32968]: I0309 16:54:17.240074 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-service-ca\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240448 master-0 kubenswrapper[32968]: I0309 16:54:17.240122 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-trusted-ca-bundle\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240448 master-0 kubenswrapper[32968]: I0309 16:54:17.240165 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-oauth-serving-cert\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240448 master-0 kubenswrapper[32968]: I0309 16:54:17.240200 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-serving-cert\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240448 master-0 kubenswrapper[32968]: I0309 
16:54:17.240303 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-oauth-config\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240448 master-0 kubenswrapper[32968]: I0309 16:54:17.240388 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c2cw\" (UniqueName: \"kubernetes.io/projected/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-kube-api-access-9c2cw\") pod \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\" (UID: \"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b\") " Mar 09 16:54:17.240829 master-0 kubenswrapper[32968]: I0309 16:54:17.240566 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-config" (OuterVolumeSpecName: "console-config") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:54:17.240986 master-0 kubenswrapper[32968]: I0309 16:54:17.240912 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-service-ca" (OuterVolumeSpecName: "service-ca") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:54:17.240986 master-0 kubenswrapper[32968]: I0309 16:54:17.240954 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:54:17.241167 master-0 kubenswrapper[32968]: I0309 16:54:17.241124 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:54:17.241225 master-0 kubenswrapper[32968]: I0309 16:54:17.241191 32968 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:54:17.244716 master-0 kubenswrapper[32968]: I0309 16:54:17.244669 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:54:17.244829 master-0 kubenswrapper[32968]: I0309 16:54:17.244712 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-kube-api-access-9c2cw" (OuterVolumeSpecName: "kube-api-access-9c2cw") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). InnerVolumeSpecName "kube-api-access-9c2cw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:54:17.245575 master-0 kubenswrapper[32968]: I0309 16:54:17.245505 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" (UID: "9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:54:17.343001 master-0 kubenswrapper[32968]: I0309 16:54:17.342770 32968 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:54:17.343001 master-0 kubenswrapper[32968]: I0309 16:54:17.342933 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c2cw\" (UniqueName: \"kubernetes.io/projected/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-kube-api-access-9c2cw\") on node \"master-0\" DevicePath \"\"" Mar 09 16:54:17.343001 master-0 kubenswrapper[32968]: I0309 16:54:17.342954 32968 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:54:17.343001 master-0 kubenswrapper[32968]: I0309 16:54:17.343003 32968 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:54:17.343527 master-0 kubenswrapper[32968]: I0309 16:54:17.343019 32968 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 
16:54:17.343527 master-0 kubenswrapper[32968]: I0309 16:54:17.343033 32968 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:54:17.858605 master-0 kubenswrapper[32968]: I0309 16:54:17.858539 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c86f77cc4-dksdp_9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b/console/0.log" Mar 09 16:54:17.859499 master-0 kubenswrapper[32968]: I0309 16:54:17.858619 32968 generic.go:334] "Generic (PLEG): container finished" podID="9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" containerID="9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe" exitCode=2 Mar 09 16:54:17.859499 master-0 kubenswrapper[32968]: I0309 16:54:17.858666 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c86f77cc4-dksdp" event={"ID":"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b","Type":"ContainerDied","Data":"9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe"} Mar 09 16:54:17.859499 master-0 kubenswrapper[32968]: I0309 16:54:17.858700 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c86f77cc4-dksdp" event={"ID":"9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b","Type":"ContainerDied","Data":"4961cea8312e72699a161f8866e24918d705feedc5c457f1ce7900ee521ccf26"} Mar 09 16:54:17.859499 master-0 kubenswrapper[32968]: I0309 16:54:17.858710 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c86f77cc4-dksdp" Mar 09 16:54:17.859499 master-0 kubenswrapper[32968]: I0309 16:54:17.858721 32968 scope.go:117] "RemoveContainer" containerID="9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe" Mar 09 16:54:17.880525 master-0 kubenswrapper[32968]: I0309 16:54:17.880478 32968 scope.go:117] "RemoveContainer" containerID="9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe" Mar 09 16:54:17.881103 master-0 kubenswrapper[32968]: E0309 16:54:17.881048 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe\": container with ID starting with 9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe not found: ID does not exist" containerID="9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe" Mar 09 16:54:17.881186 master-0 kubenswrapper[32968]: I0309 16:54:17.881099 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe"} err="failed to get container status \"9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe\": rpc error: code = NotFound desc = could not find container \"9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe\": container with ID starting with 9a1e582326347c28258f729d291de17d9ad56dece02baab19361bf7d7c7823fe not found: ID does not exist" Mar 09 16:54:17.903145 master-0 kubenswrapper[32968]: I0309 16:54:17.903044 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c86f77cc4-dksdp"] Mar 09 16:54:17.907516 master-0 kubenswrapper[32968]: I0309 16:54:17.907444 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-c86f77cc4-dksdp"] Mar 09 16:54:18.096633 master-0 kubenswrapper[32968]: I0309 16:54:18.096536 32968 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" path="/var/lib/kubelet/pods/9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b/volumes" Mar 09 16:54:24.917922 master-0 kubenswrapper[32968]: I0309 16:54:24.917870 32968 scope.go:117] "RemoveContainer" containerID="8fab2020ef9b38432e3f16fd30963c59fa955a3ba62df68c3c2ea954609a4fb6" Mar 09 16:55:24.976345 master-0 kubenswrapper[32968]: I0309 16:55:24.976249 32968 scope.go:117] "RemoveContainer" containerID="911cba7e1f9cb852c637561f891e3b5a982532d757d88a06ff9aebcbd7c475c2" Mar 09 16:55:52.628227 master-0 kubenswrapper[32968]: I0309 16:55:52.627849 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd"] Mar 09 16:55:52.629148 master-0 kubenswrapper[32968]: E0309 16:55:52.628442 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" containerName="console" Mar 09 16:55:52.629148 master-0 kubenswrapper[32968]: I0309 16:55:52.628459 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" containerName="console" Mar 09 16:55:52.629148 master-0 kubenswrapper[32968]: I0309 16:55:52.628686 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec7ce2b-51b7-4d03-ab76-a5f7220b8c1b" containerName="console" Mar 09 16:55:52.630364 master-0 kubenswrapper[32968]: I0309 16:55:52.630320 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.634028 master-0 kubenswrapper[32968]: I0309 16:55:52.633960 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w2jx8" Mar 09 16:55:52.646188 master-0 kubenswrapper[32968]: I0309 16:55:52.645974 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd"] Mar 09 16:55:52.682962 master-0 kubenswrapper[32968]: I0309 16:55:52.682867 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.683539 master-0 kubenswrapper[32968]: I0309 16:55:52.683015 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.683539 master-0 kubenswrapper[32968]: I0309 16:55:52.683058 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89cg\" (UniqueName: \"kubernetes.io/projected/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-kube-api-access-s89cg\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.785077 master-0 kubenswrapper[32968]: I0309 16:55:52.784952 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.785367 master-0 kubenswrapper[32968]: I0309 16:55:52.785196 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89cg\" (UniqueName: \"kubernetes.io/projected/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-kube-api-access-s89cg\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.785367 master-0 kubenswrapper[32968]: I0309 16:55:52.785246 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.785730 master-0 kubenswrapper[32968]: I0309 16:55:52.785638 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 
16:55:52.786634 master-0 kubenswrapper[32968]: I0309 16:55:52.786581 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.806448 master-0 kubenswrapper[32968]: I0309 16:55:52.806363 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89cg\" (UniqueName: \"kubernetes.io/projected/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-kube-api-access-s89cg\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:52.949886 master-0 kubenswrapper[32968]: I0309 16:55:52.949820 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:53.383567 master-0 kubenswrapper[32968]: I0309 16:55:53.383494 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd"] Mar 09 16:55:53.388079 master-0 kubenswrapper[32968]: W0309 16:55:53.388019 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eb2ce3d_2068_44a9_b748_234a15ea7e1c.slice/crio-4f71c5df21d376875e6a33704520d4222641db2c26d7692a4bc58c9b25212fb4 WatchSource:0}: Error finding container 4f71c5df21d376875e6a33704520d4222641db2c26d7692a4bc58c9b25212fb4: Status 404 returned error can't find the container with id 4f71c5df21d376875e6a33704520d4222641db2c26d7692a4bc58c9b25212fb4 Mar 09 16:55:53.648115 master-0 kubenswrapper[32968]: I0309 16:55:53.647958 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" event={"ID":"3eb2ce3d-2068-44a9-b748-234a15ea7e1c","Type":"ContainerStarted","Data":"dc64526b0f0b6f146c90e2b542c5adc3232f24e2b1d4e1f562e485f14ed24bce"} Mar 09 16:55:53.648115 master-0 kubenswrapper[32968]: I0309 16:55:53.648013 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" event={"ID":"3eb2ce3d-2068-44a9-b748-234a15ea7e1c","Type":"ContainerStarted","Data":"4f71c5df21d376875e6a33704520d4222641db2c26d7692a4bc58c9b25212fb4"} Mar 09 16:55:54.659225 master-0 kubenswrapper[32968]: I0309 16:55:54.659122 32968 generic.go:334] "Generic (PLEG): container finished" podID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerID="dc64526b0f0b6f146c90e2b542c5adc3232f24e2b1d4e1f562e485f14ed24bce" exitCode=0 Mar 09 16:55:54.660162 master-0 kubenswrapper[32968]: I0309 16:55:54.659243 32968 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" event={"ID":"3eb2ce3d-2068-44a9-b748-234a15ea7e1c","Type":"ContainerDied","Data":"dc64526b0f0b6f146c90e2b542c5adc3232f24e2b1d4e1f562e485f14ed24bce"} Mar 09 16:55:56.681001 master-0 kubenswrapper[32968]: I0309 16:55:56.680877 32968 generic.go:334] "Generic (PLEG): container finished" podID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerID="01eac6327314eae44754e8538b7b1bd3d5946b33c0515e6aaf8c23e13139ebf4" exitCode=0 Mar 09 16:55:56.681001 master-0 kubenswrapper[32968]: I0309 16:55:56.680976 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" event={"ID":"3eb2ce3d-2068-44a9-b748-234a15ea7e1c","Type":"ContainerDied","Data":"01eac6327314eae44754e8538b7b1bd3d5946b33c0515e6aaf8c23e13139ebf4"} Mar 09 16:55:57.693588 master-0 kubenswrapper[32968]: I0309 16:55:57.693485 32968 generic.go:334] "Generic (PLEG): container finished" podID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerID="b58311513fe83c3662ec14448bef8d6bb1bd12e3466fcaaec47123ce2e1157b6" exitCode=0 Mar 09 16:55:57.693588 master-0 kubenswrapper[32968]: I0309 16:55:57.693577 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" event={"ID":"3eb2ce3d-2068-44a9-b748-234a15ea7e1c","Type":"ContainerDied","Data":"b58311513fe83c3662ec14448bef8d6bb1bd12e3466fcaaec47123ce2e1157b6"} Mar 09 16:55:59.023266 master-0 kubenswrapper[32968]: I0309 16:55:59.023180 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:55:59.103602 master-0 kubenswrapper[32968]: I0309 16:55:59.103514 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-bundle\") pod \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " Mar 09 16:55:59.104087 master-0 kubenswrapper[32968]: I0309 16:55:59.103672 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89cg\" (UniqueName: \"kubernetes.io/projected/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-kube-api-access-s89cg\") pod \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " Mar 09 16:55:59.104087 master-0 kubenswrapper[32968]: I0309 16:55:59.103757 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-util\") pod \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\" (UID: \"3eb2ce3d-2068-44a9-b748-234a15ea7e1c\") " Mar 09 16:55:59.104682 master-0 kubenswrapper[32968]: I0309 16:55:59.104547 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-bundle" (OuterVolumeSpecName: "bundle") pod "3eb2ce3d-2068-44a9-b748-234a15ea7e1c" (UID: "3eb2ce3d-2068-44a9-b748-234a15ea7e1c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:55:59.105276 master-0 kubenswrapper[32968]: I0309 16:55:59.105242 32968 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:55:59.107855 master-0 kubenswrapper[32968]: I0309 16:55:59.107768 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-kube-api-access-s89cg" (OuterVolumeSpecName: "kube-api-access-s89cg") pod "3eb2ce3d-2068-44a9-b748-234a15ea7e1c" (UID: "3eb2ce3d-2068-44a9-b748-234a15ea7e1c"). InnerVolumeSpecName "kube-api-access-s89cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:55:59.116397 master-0 kubenswrapper[32968]: I0309 16:55:59.116312 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-util" (OuterVolumeSpecName: "util") pod "3eb2ce3d-2068-44a9-b748-234a15ea7e1c" (UID: "3eb2ce3d-2068-44a9-b748-234a15ea7e1c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:55:59.207333 master-0 kubenswrapper[32968]: I0309 16:55:59.207222 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89cg\" (UniqueName: \"kubernetes.io/projected/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-kube-api-access-s89cg\") on node \"master-0\" DevicePath \"\"" Mar 09 16:55:59.207333 master-0 kubenswrapper[32968]: I0309 16:55:59.207297 32968 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2ce3d-2068-44a9-b748-234a15ea7e1c-util\") on node \"master-0\" DevicePath \"\"" Mar 09 16:55:59.713126 master-0 kubenswrapper[32968]: I0309 16:55:59.713055 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" event={"ID":"3eb2ce3d-2068-44a9-b748-234a15ea7e1c","Type":"ContainerDied","Data":"4f71c5df21d376875e6a33704520d4222641db2c26d7692a4bc58c9b25212fb4"} Mar 09 16:55:59.713500 master-0 kubenswrapper[32968]: I0309 16:55:59.713478 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f71c5df21d376875e6a33704520d4222641db2c26d7692a4bc58c9b25212fb4" Mar 09 16:55:59.713816 master-0 kubenswrapper[32968]: I0309 16:55:59.713120 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4glgjd" Mar 09 16:56:06.237347 master-0 kubenswrapper[32968]: I0309 16:56:06.237255 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-75dd96bc9b-rxg2h"] Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: E0309 16:56:06.237692 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="util" Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: I0309 16:56:06.237706 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="util" Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: E0309 16:56:06.237732 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="extract" Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: I0309 16:56:06.237739 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="extract" Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: E0309 16:56:06.237763 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="pull" Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: I0309 16:56:06.237770 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="pull" Mar 09 16:56:06.238285 master-0 kubenswrapper[32968]: I0309 16:56:06.237933 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2ce3d-2068-44a9-b748-234a15ea7e1c" containerName="extract" Mar 09 16:56:06.239484 master-0 kubenswrapper[32968]: I0309 16:56:06.239457 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.242778 master-0 kubenswrapper[32968]: I0309 16:56:06.242718 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 09 16:56:06.242954 master-0 kubenswrapper[32968]: I0309 16:56:06.242884 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 09 16:56:06.243016 master-0 kubenswrapper[32968]: I0309 16:56:06.242744 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 09 16:56:06.243302 master-0 kubenswrapper[32968]: I0309 16:56:06.243268 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 09 16:56:06.243385 master-0 kubenswrapper[32968]: I0309 16:56:06.243357 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 09 16:56:06.267271 master-0 kubenswrapper[32968]: I0309 16:56:06.267159 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-75dd96bc9b-rxg2h"]
Mar 09 16:56:06.360024 master-0 kubenswrapper[32968]: I0309 16:56:06.359940 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jgb\" (UniqueName: \"kubernetes.io/projected/9933d69c-eca2-4899-a315-d0d67bd19424-kube-api-access-s9jgb\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.360546 master-0 kubenswrapper[32968]: I0309 16:56:06.360518 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-webhook-cert\") pod
\"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.361038 master-0 kubenswrapper[32968]: I0309 16:56:06.360956 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-apiservice-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.361381 master-0 kubenswrapper[32968]: I0309 16:56:06.361338 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9933d69c-eca2-4899-a315-d0d67bd19424-socket-dir\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.361600 master-0 kubenswrapper[32968]: I0309 16:56:06.361565 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-metrics-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.463354 master-0 kubenswrapper[32968]: I0309 16:56:06.463303 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9933d69c-eca2-4899-a315-d0d67bd19424-socket-dir\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.463676 master-0 kubenswrapper[32968]: I0309 16:56:06.463655 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-metrics-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.463960 master-0 kubenswrapper[32968]: I0309 16:56:06.463934 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9jgb\" (UniqueName: \"kubernetes.io/projected/9933d69c-eca2-4899-a315-d0d67bd19424-kube-api-access-s9jgb\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.464106 master-0 kubenswrapper[32968]: I0309 16:56:06.464081 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-webhook-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.464243 master-0 kubenswrapper[32968]: I0309 16:56:06.464229 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-apiservice-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.464377 master-0 kubenswrapper[32968]: I0309 16:56:06.464246 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9933d69c-eca2-4899-a315-d0d67bd19424-socket-dir\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.468356 master-0 kubenswrapper[32968]: I0309 16:56:06.468306 32968
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-apiservice-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.468571 master-0 kubenswrapper[32968]: I0309 16:56:06.468518 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-metrics-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.468675 master-0 kubenswrapper[32968]: I0309 16:56:06.468617 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9933d69c-eca2-4899-a315-d0d67bd19424-webhook-cert\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.484119 master-0 kubenswrapper[32968]: I0309 16:56:06.484053 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9jgb\" (UniqueName: \"kubernetes.io/projected/9933d69c-eca2-4899-a315-d0d67bd19424-kube-api-access-s9jgb\") pod \"lvms-operator-75dd96bc9b-rxg2h\" (UID: \"9933d69c-eca2-4899-a315-d0d67bd19424\") " pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:06.561200 master-0 kubenswrapper[32968]: I0309 16:56:06.561002 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:07.004992 master-0 kubenswrapper[32968]: I0309 16:56:07.004919 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-75dd96bc9b-rxg2h"]
Mar 09 16:56:07.007840 master-0 kubenswrapper[32968]: W0309 16:56:07.007785 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9933d69c_eca2_4899_a315_d0d67bd19424.slice/crio-140f8be4ad27202d999fd29350efb3551208942c8f8c358ae9b982d5a7291335 WatchSource:0}: Error finding container 140f8be4ad27202d999fd29350efb3551208942c8f8c358ae9b982d5a7291335: Status 404 returned error can't find the container with id 140f8be4ad27202d999fd29350efb3551208942c8f8c358ae9b982d5a7291335
Mar 09 16:56:07.792994 master-0 kubenswrapper[32968]: I0309 16:56:07.792916 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h" event={"ID":"9933d69c-eca2-4899-a315-d0d67bd19424","Type":"ContainerStarted","Data":"140f8be4ad27202d999fd29350efb3551208942c8f8c358ae9b982d5a7291335"}
Mar 09 16:56:12.850468 master-0 kubenswrapper[32968]: I0309 16:56:12.850385 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h" event={"ID":"9933d69c-eca2-4899-a315-d0d67bd19424","Type":"ContainerStarted","Data":"1f58b7a0488a6f15ad18f6a397ed6a37c987d9fe7cb5cb42d2c90d4a72dffa04"}
Mar 09 16:56:12.851169 master-0 kubenswrapper[32968]: I0309 16:56:12.851150 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:12.870310 master-0 kubenswrapper[32968]: I0309 16:56:12.870224 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h" podStartSLOduration=1.618266149 podStartE2EDuration="6.870203577s"
podCreationTimestamp="2026-03-09 16:56:06 +0000 UTC" firstStartedPulling="2026-03-09 16:56:07.010410365 +0000 UTC m=+593.113732895" lastFinishedPulling="2026-03-09 16:56:12.262347793 +0000 UTC m=+598.365670323" observedRunningTime="2026-03-09 16:56:12.869452007 +0000 UTC m=+598.972774547" watchObservedRunningTime="2026-03-09 16:56:12.870203577 +0000 UTC m=+598.973526117"
Mar 09 16:56:13.865866 master-0 kubenswrapper[32968]: I0309 16:56:13.865801 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-75dd96bc9b-rxg2h"
Mar 09 16:56:16.997220 master-0 kubenswrapper[32968]: I0309 16:56:16.997154 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"]
Mar 09 16:56:16.998651 master-0 kubenswrapper[32968]: I0309 16:56:16.998624 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.002009 master-0 kubenswrapper[32968]: I0309 16:56:17.001941 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w2jx8"
Mar 09 16:56:17.014883 master-0 kubenswrapper[32968]: I0309 16:56:17.014798 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"]
Mar 09 16:56:17.057659 master-0 kubenswrapper[32968]: I0309 16:56:17.057591 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.057908 master-0
kubenswrapper[32968]: I0309 16:56:17.057806 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmk6\" (UniqueName: \"kubernetes.io/projected/f80073a5-86c9-4ba8-8c4d-c30572de40f3-kube-api-access-rwmk6\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.058287 master-0 kubenswrapper[32968]: I0309 16:56:17.058219 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.159851 master-0 kubenswrapper[32968]: I0309 16:56:17.159793 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.160072 master-0 kubenswrapper[32968]: I0309 16:56:17.159955 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.160072 master-0 kubenswrapper[32968]: I0309 16:56:17.160022 32968
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmk6\" (UniqueName: \"kubernetes.io/projected/f80073a5-86c9-4ba8-8c4d-c30572de40f3-kube-api-access-rwmk6\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.160403 master-0 kubenswrapper[32968]: I0309 16:56:17.160358 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.160651 master-0 kubenswrapper[32968]: I0309 16:56:17.160618 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.176487 master-0 kubenswrapper[32968]: I0309 16:56:17.176402 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmk6\" (UniqueName: \"kubernetes.io/projected/f80073a5-86c9-4ba8-8c4d-c30572de40f3-kube-api-access-rwmk6\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.316132 master-0 kubenswrapper[32968]: I0309 16:56:17.315966 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"
Mar 09 16:56:17.376856 master-0 kubenswrapper[32968]: I0309 16:56:17.376791 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"]
Mar 09 16:56:17.378644 master-0 kubenswrapper[32968]: I0309 16:56:17.378610 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.389221 master-0 kubenswrapper[32968]: I0309 16:56:17.389157 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"]
Mar 09 16:56:17.470389 master-0 kubenswrapper[32968]: I0309 16:56:17.470267 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.470734 master-0 kubenswrapper[32968]: I0309 16:56:17.470447 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72r5\" (UniqueName: \"kubernetes.io/projected/f9c98f5f-3b96-49b4-8998-2cca02af4736-kube-api-access-h72r5\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.470734 master-0 kubenswrapper[32968]: I0309 16:56:17.470615 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\"
(UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.573718 master-0 kubenswrapper[32968]: I0309 16:56:17.572877 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.574024 master-0 kubenswrapper[32968]: I0309 16:56:17.573723 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.574146 master-0 kubenswrapper[32968]: I0309 16:56:17.574043 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72r5\" (UniqueName: \"kubernetes.io/projected/f9c98f5f-3b96-49b4-8998-2cca02af4736-kube-api-access-h72r5\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.574975 master-0 kubenswrapper[32968]: I0309 16:56:17.574588 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-bundle\") pod
\"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.574975 master-0 kubenswrapper[32968]: I0309 16:56:17.574970 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.591308 master-0 kubenswrapper[32968]: I0309 16:56:17.591249 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72r5\" (UniqueName: \"kubernetes.io/projected/f9c98f5f-3b96-49b4-8998-2cca02af4736-kube-api-access-h72r5\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.724516 master-0 kubenswrapper[32968]: I0309 16:56:17.724432 32968 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"
Mar 09 16:56:17.769949 master-0 kubenswrapper[32968]: I0309 16:56:17.769863 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6"]
Mar 09 16:56:17.772795 master-0 kubenswrapper[32968]: W0309 16:56:17.772288 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80073a5_86c9_4ba8_8c4d_c30572de40f3.slice/crio-7081352efc1038ad35c9cfd268e9227e91f1dab6ffbec5921afd025fae1a77da WatchSource:0}: Error finding container 7081352efc1038ad35c9cfd268e9227e91f1dab6ffbec5921afd025fae1a77da: Status 404 returned error can't find the container with id 7081352efc1038ad35c9cfd268e9227e91f1dab6ffbec5921afd025fae1a77da
Mar 09 16:56:17.897362 master-0 kubenswrapper[32968]: I0309 16:56:17.896346 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" event={"ID":"f80073a5-86c9-4ba8-8c4d-c30572de40f3","Type":"ContainerStarted","Data":"7081352efc1038ad35c9cfd268e9227e91f1dab6ffbec5921afd025fae1a77da"}
Mar 09 16:56:18.156793 master-0 kubenswrapper[32968]: I0309 16:56:18.156732 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s"]
Mar 09 16:56:18.159681 master-0 kubenswrapper[32968]: W0309 16:56:18.159612 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9c98f5f_3b96_49b4_8998_2cca02af4736.slice/crio-22faaa3cf46fbaf9a98e4f67e24c6332e0f7ac94c663c62712377335df5ec38e WatchSource:0}: Error finding container 22faaa3cf46fbaf9a98e4f67e24c6332e0f7ac94c663c62712377335df5ec38e: Status 404 returned error can't find the container with id
22faaa3cf46fbaf9a98e4f67e24c6332e0f7ac94c663c62712377335df5ec38e
Mar 09 16:56:18.783227 master-0 kubenswrapper[32968]: I0309 16:56:18.783134 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"]
Mar 09 16:56:18.785409 master-0 kubenswrapper[32968]: I0309 16:56:18.785180 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.800770 master-0 kubenswrapper[32968]: I0309 16:56:18.800719 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"]
Mar 09 16:56:18.895109 master-0 kubenswrapper[32968]: I0309 16:56:18.895012 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.895685 master-0 kubenswrapper[32968]: I0309 16:56:18.895248 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r548\" (UniqueName: \"kubernetes.io/projected/917a8eab-f666-4112-9685-2f971b873813-kube-api-access-8r548\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.895685 master-0 kubenswrapper[32968]: I0309 16:56:18.895614 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName:
\"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.908375 master-0 kubenswrapper[32968]: I0309 16:56:18.908268 32968 generic.go:334] "Generic (PLEG): container finished" podID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerID="6e7cbd9ba6ae09a32528b6d3c84274f886a05f8df397fe102c055ac6aee4dec2" exitCode=0
Mar 09 16:56:18.908375 master-0 kubenswrapper[32968]: I0309 16:56:18.908355 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" event={"ID":"f9c98f5f-3b96-49b4-8998-2cca02af4736","Type":"ContainerDied","Data":"6e7cbd9ba6ae09a32528b6d3c84274f886a05f8df397fe102c055ac6aee4dec2"}
Mar 09 16:56:18.908375 master-0 kubenswrapper[32968]: I0309 16:56:18.908392 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" event={"ID":"f9c98f5f-3b96-49b4-8998-2cca02af4736","Type":"ContainerStarted","Data":"22faaa3cf46fbaf9a98e4f67e24c6332e0f7ac94c663c62712377335df5ec38e"}
Mar 09 16:56:18.911749 master-0 kubenswrapper[32968]: I0309 16:56:18.910620 32968 generic.go:334] "Generic (PLEG): container finished" podID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerID="2d5e60ccc8f60c88dc473edfd4f7fa185f0d6f4175f79d3d6c594cf92b8050ea" exitCode=0
Mar 09 16:56:18.911749 master-0 kubenswrapper[32968]: I0309 16:56:18.910655 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" event={"ID":"f80073a5-86c9-4ba8-8c4d-c30572de40f3","Type":"ContainerDied","Data":"2d5e60ccc8f60c88dc473edfd4f7fa185f0d6f4175f79d3d6c594cf92b8050ea"}
Mar 09 16:56:18.997375 master-0
kubenswrapper[32968]: I0309 16:56:18.997269 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.997892 master-0 kubenswrapper[32968]: I0309 16:56:18.997698 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.997969 master-0 kubenswrapper[32968]: I0309 16:56:18.997897 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r548\" (UniqueName: \"kubernetes.io/projected/917a8eab-f666-4112-9685-2f971b873813-kube-api-access-8r548\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.997969 master-0 kubenswrapper[32968]: I0309 16:56:18.997925 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:18.998532 master-0 kubenswrapper[32968]: I0309 16:56:18.998478 32968 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:19.014998 master-0 kubenswrapper[32968]: I0309 16:56:19.014927 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r548\" (UniqueName: \"kubernetes.io/projected/917a8eab-f666-4112-9685-2f971b873813-kube-api-access-8r548\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:19.110219 master-0 kubenswrapper[32968]: I0309 16:56:19.110059 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
Mar 09 16:56:19.743641 master-0 kubenswrapper[32968]: I0309 16:56:19.743583 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"]
Mar 09 16:56:19.926206 master-0 kubenswrapper[32968]: I0309 16:56:19.926111 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" event={"ID":"917a8eab-f666-4112-9685-2f971b873813","Type":"ContainerStarted","Data":"d65489fb865035932ea95ccf598dca460deb9ffb07d631aa4dd435a42ec85889"}
Mar 09 16:56:19.926206 master-0 kubenswrapper[32968]: I0309 16:56:19.926188 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g"
event={"ID":"917a8eab-f666-4112-9685-2f971b873813","Type":"ContainerStarted","Data":"4ae6b858f5e9581bd66582b3c9e22b555a77742a310a92e8ffe88922eabc64c9"}
Mar 09 16:56:20.949790 master-0 kubenswrapper[32968]: I0309 16:56:20.949699 32968 generic.go:334] "Generic (PLEG): container finished" podID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerID="7fff0db31d2d9b54689fabdee092a5845d998578205f754f673a02299ccd6894" exitCode=0
Mar 09 16:56:20.950438 master-0 kubenswrapper[32968]: I0309 16:56:20.949823 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" event={"ID":"f9c98f5f-3b96-49b4-8998-2cca02af4736","Type":"ContainerDied","Data":"7fff0db31d2d9b54689fabdee092a5845d998578205f754f673a02299ccd6894"}
Mar 09 16:56:20.953437 master-0 kubenswrapper[32968]: I0309 16:56:20.953355 32968 generic.go:334] "Generic (PLEG): container finished" podID="917a8eab-f666-4112-9685-2f971b873813" containerID="d65489fb865035932ea95ccf598dca460deb9ffb07d631aa4dd435a42ec85889" exitCode=0
Mar 09 16:56:20.953557 master-0 kubenswrapper[32968]: I0309 16:56:20.953464 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" event={"ID":"917a8eab-f666-4112-9685-2f971b873813","Type":"ContainerDied","Data":"d65489fb865035932ea95ccf598dca460deb9ffb07d631aa4dd435a42ec85889"}
Mar 09 16:56:21.965463 master-0 kubenswrapper[32968]: I0309 16:56:21.965363 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" event={"ID":"f80073a5-86c9-4ba8-8c4d-c30572de40f3","Type":"ContainerStarted","Data":"48e7dd89057564f51a407015738c35c63fb1bdb6920d5468b5823347eb5b3240"}
Mar 09 16:56:21.970612 master-0 kubenswrapper[32968]: I0309 16:56:21.970532 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" event={"ID":"f9c98f5f-3b96-49b4-8998-2cca02af4736","Type":"ContainerStarted","Data":"a100cf84246679740c0fe2fc6075f9323cbd5b726edd7a45ed2f8f2c4b7ec80c"} Mar 09 16:56:22.424241 master-0 kubenswrapper[32968]: I0309 16:56:22.423947 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" podStartSLOduration=3.9144543990000003 podStartE2EDuration="5.423924632s" podCreationTimestamp="2026-03-09 16:56:17 +0000 UTC" firstStartedPulling="2026-03-09 16:56:18.91025399 +0000 UTC m=+605.013576520" lastFinishedPulling="2026-03-09 16:56:20.419724213 +0000 UTC m=+606.523046753" observedRunningTime="2026-03-09 16:56:22.421357321 +0000 UTC m=+608.524679861" watchObservedRunningTime="2026-03-09 16:56:22.423924632 +0000 UTC m=+608.527247202" Mar 09 16:56:22.981578 master-0 kubenswrapper[32968]: I0309 16:56:22.981435 32968 generic.go:334] "Generic (PLEG): container finished" podID="917a8eab-f666-4112-9685-2f971b873813" containerID="124be36c91e509f1e44f11a5fe33491fa2ef95ea41d8a79779d7d95d1a6d452d" exitCode=0 Mar 09 16:56:22.982116 master-0 kubenswrapper[32968]: I0309 16:56:22.981576 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" event={"ID":"917a8eab-f666-4112-9685-2f971b873813","Type":"ContainerDied","Data":"124be36c91e509f1e44f11a5fe33491fa2ef95ea41d8a79779d7d95d1a6d452d"} Mar 09 16:56:22.984876 master-0 kubenswrapper[32968]: I0309 16:56:22.984796 32968 generic.go:334] "Generic (PLEG): container finished" podID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerID="a100cf84246679740c0fe2fc6075f9323cbd5b726edd7a45ed2f8f2c4b7ec80c" exitCode=0 Mar 09 16:56:22.984998 master-0 kubenswrapper[32968]: I0309 16:56:22.984879 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" event={"ID":"f9c98f5f-3b96-49b4-8998-2cca02af4736","Type":"ContainerDied","Data":"a100cf84246679740c0fe2fc6075f9323cbd5b726edd7a45ed2f8f2c4b7ec80c"} Mar 09 16:56:22.988569 master-0 kubenswrapper[32968]: I0309 16:56:22.988522 32968 generic.go:334] "Generic (PLEG): container finished" podID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerID="48e7dd89057564f51a407015738c35c63fb1bdb6920d5468b5823347eb5b3240" exitCode=0 Mar 09 16:56:22.988658 master-0 kubenswrapper[32968]: I0309 16:56:22.988581 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" event={"ID":"f80073a5-86c9-4ba8-8c4d-c30572de40f3","Type":"ContainerDied","Data":"48e7dd89057564f51a407015738c35c63fb1bdb6920d5468b5823347eb5b3240"} Mar 09 16:56:24.000757 master-0 kubenswrapper[32968]: I0309 16:56:24.000654 32968 generic.go:334] "Generic (PLEG): container finished" podID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerID="089330b7e670d938944955dbc8b8322bee203c1b3fa476803827dcbee5f6b6b0" exitCode=0 Mar 09 16:56:24.000757 master-0 kubenswrapper[32968]: I0309 16:56:24.000725 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" event={"ID":"f80073a5-86c9-4ba8-8c4d-c30572de40f3","Type":"ContainerDied","Data":"089330b7e670d938944955dbc8b8322bee203c1b3fa476803827dcbee5f6b6b0"} Mar 09 16:56:24.003922 master-0 kubenswrapper[32968]: I0309 16:56:24.003858 32968 generic.go:334] "Generic (PLEG): container finished" podID="917a8eab-f666-4112-9685-2f971b873813" containerID="f6d2ea96017c72f390206e1773c89ffb80596b7612ab670788f8f37b62cabb72" exitCode=0 Mar 09 16:56:24.004037 master-0 kubenswrapper[32968]: I0309 16:56:24.003913 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" event={"ID":"917a8eab-f666-4112-9685-2f971b873813","Type":"ContainerDied","Data":"f6d2ea96017c72f390206e1773c89ffb80596b7612ab670788f8f37b62cabb72"} Mar 09 16:56:24.328184 master-0 kubenswrapper[32968]: I0309 16:56:24.328127 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" Mar 09 16:56:24.451088 master-0 kubenswrapper[32968]: I0309 16:56:24.451000 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-bundle\") pod \"f9c98f5f-3b96-49b4-8998-2cca02af4736\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " Mar 09 16:56:24.451088 master-0 kubenswrapper[32968]: I0309 16:56:24.451105 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72r5\" (UniqueName: \"kubernetes.io/projected/f9c98f5f-3b96-49b4-8998-2cca02af4736-kube-api-access-h72r5\") pod \"f9c98f5f-3b96-49b4-8998-2cca02af4736\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " Mar 09 16:56:24.451396 master-0 kubenswrapper[32968]: I0309 16:56:24.451268 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-util\") pod \"f9c98f5f-3b96-49b4-8998-2cca02af4736\" (UID: \"f9c98f5f-3b96-49b4-8998-2cca02af4736\") " Mar 09 16:56:24.452515 master-0 kubenswrapper[32968]: I0309 16:56:24.452459 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-bundle" (OuterVolumeSpecName: "bundle") pod "f9c98f5f-3b96-49b4-8998-2cca02af4736" (UID: "f9c98f5f-3b96-49b4-8998-2cca02af4736"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:56:24.455578 master-0 kubenswrapper[32968]: I0309 16:56:24.455453 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c98f5f-3b96-49b4-8998-2cca02af4736-kube-api-access-h72r5" (OuterVolumeSpecName: "kube-api-access-h72r5") pod "f9c98f5f-3b96-49b4-8998-2cca02af4736" (UID: "f9c98f5f-3b96-49b4-8998-2cca02af4736"). InnerVolumeSpecName "kube-api-access-h72r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:56:24.462236 master-0 kubenswrapper[32968]: I0309 16:56:24.462134 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-util" (OuterVolumeSpecName: "util") pod "f9c98f5f-3b96-49b4-8998-2cca02af4736" (UID: "f9c98f5f-3b96-49b4-8998-2cca02af4736"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:56:24.554071 master-0 kubenswrapper[32968]: I0309 16:56:24.553969 32968 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:24.554071 master-0 kubenswrapper[32968]: I0309 16:56:24.554025 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h72r5\" (UniqueName: \"kubernetes.io/projected/f9c98f5f-3b96-49b4-8998-2cca02af4736-kube-api-access-h72r5\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:24.554071 master-0 kubenswrapper[32968]: I0309 16:56:24.554035 32968 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9c98f5f-3b96-49b4-8998-2cca02af4736-util\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:24.978767 master-0 kubenswrapper[32968]: I0309 16:56:24.978704 32968 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2"] Mar 09 16:56:24.979093 master-0 kubenswrapper[32968]: E0309 16:56:24.979062 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="util" Mar 09 16:56:24.979093 master-0 kubenswrapper[32968]: I0309 16:56:24.979089 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="util" Mar 09 16:56:24.979193 master-0 kubenswrapper[32968]: E0309 16:56:24.979120 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="extract" Mar 09 16:56:24.979193 master-0 kubenswrapper[32968]: I0309 16:56:24.979129 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="extract" Mar 09 16:56:24.979193 master-0 kubenswrapper[32968]: E0309 16:56:24.979184 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="pull" Mar 09 16:56:24.979193 master-0 kubenswrapper[32968]: I0309 16:56:24.979192 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="pull" Mar 09 16:56:24.979443 master-0 kubenswrapper[32968]: I0309 16:56:24.979391 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c98f5f-3b96-49b4-8998-2cca02af4736" containerName="extract" Mar 09 16:56:24.980663 master-0 kubenswrapper[32968]: I0309 16:56:24.980633 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:24.990113 master-0 kubenswrapper[32968]: I0309 16:56:24.990054 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2"] Mar 09 16:56:25.018828 master-0 kubenswrapper[32968]: I0309 16:56:25.018743 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" event={"ID":"f9c98f5f-3b96-49b4-8998-2cca02af4736","Type":"ContainerDied","Data":"22faaa3cf46fbaf9a98e4f67e24c6332e0f7ac94c663c62712377335df5ec38e"} Mar 09 16:56:25.018828 master-0 kubenswrapper[32968]: I0309 16:56:25.018809 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4xbn2s" Mar 09 16:56:25.019439 master-0 kubenswrapper[32968]: I0309 16:56:25.018815 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22faaa3cf46fbaf9a98e4f67e24c6332e0f7ac94c663c62712377335df5ec38e" Mar 09 16:56:25.062353 master-0 kubenswrapper[32968]: I0309 16:56:25.062262 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.062808 master-0 kubenswrapper[32968]: I0309 16:56:25.062397 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgk6\" (UniqueName: \"kubernetes.io/projected/ac418822-ffae-4170-9759-4f9e465b489b-kube-api-access-sxgk6\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.063260 master-0 kubenswrapper[32968]: I0309 16:56:25.063221 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.164075 master-0 kubenswrapper[32968]: I0309 16:56:25.163982 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgk6\" (UniqueName: \"kubernetes.io/projected/ac418822-ffae-4170-9759-4f9e465b489b-kube-api-access-sxgk6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.164710 master-0 kubenswrapper[32968]: I0309 16:56:25.164356 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.164710 master-0 kubenswrapper[32968]: I0309 16:56:25.164535 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: 
\"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.165150 master-0 kubenswrapper[32968]: I0309 16:56:25.165018 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.165150 master-0 kubenswrapper[32968]: I0309 16:56:25.165067 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.189986 master-0 kubenswrapper[32968]: I0309 16:56:25.189918 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgk6\" (UniqueName: \"kubernetes.io/projected/ac418822-ffae-4170-9759-4f9e465b489b-kube-api-access-sxgk6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.300827 master-0 kubenswrapper[32968]: I0309 16:56:25.300695 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:25.452305 master-0 kubenswrapper[32968]: I0309 16:56:25.452229 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" Mar 09 16:56:25.456362 master-0 kubenswrapper[32968]: I0309 16:56:25.456196 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" Mar 09 16:56:25.476449 master-0 kubenswrapper[32968]: I0309 16:56:25.473646 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-util\") pod \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " Mar 09 16:56:25.476449 master-0 kubenswrapper[32968]: I0309 16:56:25.473863 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r548\" (UniqueName: \"kubernetes.io/projected/917a8eab-f666-4112-9685-2f971b873813-kube-api-access-8r548\") pod \"917a8eab-f666-4112-9685-2f971b873813\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " Mar 09 16:56:25.476449 master-0 kubenswrapper[32968]: I0309 16:56:25.473899 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-bundle\") pod \"917a8eab-f666-4112-9685-2f971b873813\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " Mar 09 16:56:25.476449 master-0 kubenswrapper[32968]: I0309 16:56:25.473997 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-bundle\") pod \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " Mar 09 16:56:25.476449 master-0 kubenswrapper[32968]: I0309 16:56:25.474036 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-util\") pod \"917a8eab-f666-4112-9685-2f971b873813\" (UID: \"917a8eab-f666-4112-9685-2f971b873813\") " Mar 09 16:56:25.476449 master-0 kubenswrapper[32968]: I0309 16:56:25.474108 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmk6\" (UniqueName: \"kubernetes.io/projected/f80073a5-86c9-4ba8-8c4d-c30572de40f3-kube-api-access-rwmk6\") pod \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\" (UID: \"f80073a5-86c9-4ba8-8c4d-c30572de40f3\") " Mar 09 16:56:25.477124 master-0 kubenswrapper[32968]: I0309 16:56:25.476753 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-bundle" (OuterVolumeSpecName: "bundle") pod "917a8eab-f666-4112-9685-2f971b873813" (UID: "917a8eab-f666-4112-9685-2f971b873813"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:56:25.477585 master-0 kubenswrapper[32968]: I0309 16:56:25.477507 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-bundle" (OuterVolumeSpecName: "bundle") pod "f80073a5-86c9-4ba8-8c4d-c30572de40f3" (UID: "f80073a5-86c9-4ba8-8c4d-c30572de40f3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:56:25.483729 master-0 kubenswrapper[32968]: I0309 16:56:25.483597 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80073a5-86c9-4ba8-8c4d-c30572de40f3-kube-api-access-rwmk6" (OuterVolumeSpecName: "kube-api-access-rwmk6") pod "f80073a5-86c9-4ba8-8c4d-c30572de40f3" (UID: "f80073a5-86c9-4ba8-8c4d-c30572de40f3"). InnerVolumeSpecName "kube-api-access-rwmk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:56:25.488617 master-0 kubenswrapper[32968]: I0309 16:56:25.484928 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/917a8eab-f666-4112-9685-2f971b873813-kube-api-access-8r548" (OuterVolumeSpecName: "kube-api-access-8r548") pod "917a8eab-f666-4112-9685-2f971b873813" (UID: "917a8eab-f666-4112-9685-2f971b873813"). InnerVolumeSpecName "kube-api-access-8r548". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:56:25.494733 master-0 kubenswrapper[32968]: I0309 16:56:25.494591 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-util" (OuterVolumeSpecName: "util") pod "f80073a5-86c9-4ba8-8c4d-c30572de40f3" (UID: "f80073a5-86c9-4ba8-8c4d-c30572de40f3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:56:25.496245 master-0 kubenswrapper[32968]: I0309 16:56:25.496169 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-util" (OuterVolumeSpecName: "util") pod "917a8eab-f666-4112-9685-2f971b873813" (UID: "917a8eab-f666-4112-9685-2f971b873813"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 09 16:56:25.578236 master-0 kubenswrapper[32968]: I0309 16:56:25.578068 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r548\" (UniqueName: \"kubernetes.io/projected/917a8eab-f666-4112-9685-2f971b873813-kube-api-access-8r548\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:25.578236 master-0 kubenswrapper[32968]: I0309 16:56:25.578128 32968 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:25.578236 master-0 kubenswrapper[32968]: I0309 16:56:25.578143 32968 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:25.578236 master-0 kubenswrapper[32968]: I0309 16:56:25.578154 32968 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/917a8eab-f666-4112-9685-2f971b873813-util\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:25.578236 master-0 kubenswrapper[32968]: I0309 16:56:25.578165 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmk6\" (UniqueName: \"kubernetes.io/projected/f80073a5-86c9-4ba8-8c4d-c30572de40f3-kube-api-access-rwmk6\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:25.578236 master-0 kubenswrapper[32968]: I0309 16:56:25.578178 32968 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f80073a5-86c9-4ba8-8c4d-c30572de40f3-util\") on node \"master-0\" DevicePath \"\"" Mar 09 16:56:25.740893 master-0 kubenswrapper[32968]: I0309 16:56:25.740806 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2"] Mar 09 16:56:25.749509 master-0 
kubenswrapper[32968]: W0309 16:56:25.749436 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac418822_ffae_4170_9759_4f9e465b489b.slice/crio-e3fca4688f5ec8b6854178a1120314cd36d88c0bfb6f6c1f3a60dcc757de6ce5 WatchSource:0}: Error finding container e3fca4688f5ec8b6854178a1120314cd36d88c0bfb6f6c1f3a60dcc757de6ce5: Status 404 returned error can't find the container with id e3fca4688f5ec8b6854178a1120314cd36d88c0bfb6f6c1f3a60dcc757de6ce5 Mar 09 16:56:26.029679 master-0 kubenswrapper[32968]: I0309 16:56:26.029625 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" event={"ID":"f80073a5-86c9-4ba8-8c4d-c30572de40f3","Type":"ContainerDied","Data":"7081352efc1038ad35c9cfd268e9227e91f1dab6ffbec5921afd025fae1a77da"} Mar 09 16:56:26.029679 master-0 kubenswrapper[32968]: I0309 16:56:26.029673 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7081352efc1038ad35c9cfd268e9227e91f1dab6ffbec5921afd025fae1a77da" Mar 09 16:56:26.029679 master-0 kubenswrapper[32968]: I0309 16:56:26.029673 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5glhg6" Mar 09 16:56:26.033352 master-0 kubenswrapper[32968]: I0309 16:56:26.033309 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" event={"ID":"917a8eab-f666-4112-9685-2f971b873813","Type":"ContainerDied","Data":"4ae6b858f5e9581bd66582b3c9e22b555a77742a310a92e8ffe88922eabc64c9"} Mar 09 16:56:26.033352 master-0 kubenswrapper[32968]: I0309 16:56:26.033340 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae6b858f5e9581bd66582b3c9e22b555a77742a310a92e8ffe88922eabc64c9" Mar 09 16:56:26.033514 master-0 kubenswrapper[32968]: I0309 16:56:26.033389 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82rgr5g" Mar 09 16:56:26.037558 master-0 kubenswrapper[32968]: I0309 16:56:26.037508 32968 generic.go:334] "Generic (PLEG): container finished" podID="ac418822-ffae-4170-9759-4f9e465b489b" containerID="7b0513fdee0023afa4551f442d992c99dda748a9e31520f8e1b5fddd5f804ae9" exitCode=0 Mar 09 16:56:26.037558 master-0 kubenswrapper[32968]: I0309 16:56:26.037550 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerDied","Data":"7b0513fdee0023afa4551f442d992c99dda748a9e31520f8e1b5fddd5f804ae9"} Mar 09 16:56:26.037693 master-0 kubenswrapper[32968]: I0309 16:56:26.037571 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerStarted","Data":"e3fca4688f5ec8b6854178a1120314cd36d88c0bfb6f6c1f3a60dcc757de6ce5"} Mar 09 
16:56:31.564683 master-0 kubenswrapper[32968]: I0309 16:56:31.564565 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"] Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: E0309 16:56:31.564949 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="util" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.564970 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="util" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: E0309 16:56:31.564995 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="pull" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565002 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="pull" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: E0309 16:56:31.565028 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="extract" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565036 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="extract" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: E0309 16:56:31.565050 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="util" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565059 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="util" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: E0309 16:56:31.565069 32968 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="extract" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565076 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="extract" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: E0309 16:56:31.565109 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="pull" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565116 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="pull" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565292 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="917a8eab-f666-4112-9685-2f971b873813" containerName="extract" Mar 09 16:56:31.565395 master-0 kubenswrapper[32968]: I0309 16:56:31.565320 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80073a5-86c9-4ba8-8c4d-c30572de40f3" containerName="extract" Mar 09 16:56:31.565967 master-0 kubenswrapper[32968]: I0309 16:56:31.565956 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.568363 master-0 kubenswrapper[32968]: I0309 16:56:31.568329 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 09 16:56:31.568596 master-0 kubenswrapper[32968]: I0309 16:56:31.568416 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 09 16:56:31.583969 master-0 kubenswrapper[32968]: I0309 16:56:31.583924 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvhdg\" (UniqueName: \"kubernetes.io/projected/bfb9f981-97a9-46b3-beed-c626b7cd7170-kube-api-access-jvhdg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-psc2s\" (UID: \"bfb9f981-97a9-46b3-beed-c626b7cd7170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.584323 master-0 kubenswrapper[32968]: I0309 16:56:31.584300 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfb9f981-97a9-46b3-beed-c626b7cd7170-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-psc2s\" (UID: \"bfb9f981-97a9-46b3-beed-c626b7cd7170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.589532 master-0 kubenswrapper[32968]: I0309 16:56:31.589360 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"]
Mar 09 16:56:31.686582 master-0 kubenswrapper[32968]: I0309 16:56:31.686483 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvhdg\" (UniqueName: \"kubernetes.io/projected/bfb9f981-97a9-46b3-beed-c626b7cd7170-kube-api-access-jvhdg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-psc2s\" (UID: \"bfb9f981-97a9-46b3-beed-c626b7cd7170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.686582 master-0 kubenswrapper[32968]: I0309 16:56:31.686575 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfb9f981-97a9-46b3-beed-c626b7cd7170-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-psc2s\" (UID: \"bfb9f981-97a9-46b3-beed-c626b7cd7170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.687375 master-0 kubenswrapper[32968]: I0309 16:56:31.687331 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bfb9f981-97a9-46b3-beed-c626b7cd7170-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-psc2s\" (UID: \"bfb9f981-97a9-46b3-beed-c626b7cd7170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.705448 master-0 kubenswrapper[32968]: I0309 16:56:31.705304 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvhdg\" (UniqueName: \"kubernetes.io/projected/bfb9f981-97a9-46b3-beed-c626b7cd7170-kube-api-access-jvhdg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-psc2s\" (UID: \"bfb9f981-97a9-46b3-beed-c626b7cd7170\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:31.886226 master-0 kubenswrapper[32968]: I0309 16:56:31.886039 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"
Mar 09 16:56:32.656745 master-0 kubenswrapper[32968]: I0309 16:56:32.656675 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s"]
Mar 09 16:56:32.657371 master-0 kubenswrapper[32968]: W0309 16:56:32.657303 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb9f981_97a9_46b3_beed_c626b7cd7170.slice/crio-fbbf8a5415f3f9aedbe9af633ac201c3e1aa4e8507ee9f8d117685247e436541 WatchSource:0}: Error finding container fbbf8a5415f3f9aedbe9af633ac201c3e1aa4e8507ee9f8d117685247e436541: Status 404 returned error can't find the container with id fbbf8a5415f3f9aedbe9af633ac201c3e1aa4e8507ee9f8d117685247e436541
Mar 09 16:56:33.126757 master-0 kubenswrapper[32968]: I0309 16:56:33.126673 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s" event={"ID":"bfb9f981-97a9-46b3-beed-c626b7cd7170","Type":"ContainerStarted","Data":"fbbf8a5415f3f9aedbe9af633ac201c3e1aa4e8507ee9f8d117685247e436541"}
Mar 09 16:56:37.162840 master-0 kubenswrapper[32968]: I0309 16:56:37.162705 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s" event={"ID":"bfb9f981-97a9-46b3-beed-c626b7cd7170","Type":"ContainerStarted","Data":"38bb5148dfc8dda778be15ab0c0745fcc296beb8456801e32e7c3534993be3ba"}
Mar 09 16:56:37.205834 master-0 kubenswrapper[32968]: I0309 16:56:37.205731 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-psc2s" podStartSLOduration=2.047275681 podStartE2EDuration="6.205709138s" podCreationTimestamp="2026-03-09 16:56:31 +0000 UTC" firstStartedPulling="2026-03-09 16:56:32.673286848 +0000 UTC m=+618.776609388" lastFinishedPulling="2026-03-09 16:56:36.831720305 +0000 UTC m=+622.935042845" observedRunningTime="2026-03-09 16:56:37.20252032 +0000 UTC m=+623.305842880" watchObservedRunningTime="2026-03-09 16:56:37.205709138 +0000 UTC m=+623.309031668"
Mar 09 16:56:40.238771 master-0 kubenswrapper[32968]: I0309 16:56:40.238656 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-v9tt6"]
Mar 09 16:56:40.242921 master-0 kubenswrapper[32968]: I0309 16:56:40.240395 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.246247 master-0 kubenswrapper[32968]: I0309 16:56:40.246184 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 09 16:56:40.246528 master-0 kubenswrapper[32968]: I0309 16:56:40.246474 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 09 16:56:40.267593 master-0 kubenswrapper[32968]: I0309 16:56:40.267533 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-v9tt6"]
Mar 09 16:56:40.407544 master-0 kubenswrapper[32968]: I0309 16:56:40.407463 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c71de91-357c-432d-8777-553e6c2d301d-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-v9tt6\" (UID: \"9c71de91-357c-432d-8777-553e6c2d301d\") " pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.407544 master-0 kubenswrapper[32968]: I0309 16:56:40.407533 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg6s6\" (UniqueName: \"kubernetes.io/projected/9c71de91-357c-432d-8777-553e6c2d301d-kube-api-access-dg6s6\") pod \"cert-manager-webhook-6888856db4-v9tt6\" (UID: \"9c71de91-357c-432d-8777-553e6c2d301d\") " pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.509457 master-0 kubenswrapper[32968]: I0309 16:56:40.509271 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c71de91-357c-432d-8777-553e6c2d301d-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-v9tt6\" (UID: \"9c71de91-357c-432d-8777-553e6c2d301d\") " pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.509457 master-0 kubenswrapper[32968]: I0309 16:56:40.509358 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg6s6\" (UniqueName: \"kubernetes.io/projected/9c71de91-357c-432d-8777-553e6c2d301d-kube-api-access-dg6s6\") pod \"cert-manager-webhook-6888856db4-v9tt6\" (UID: \"9c71de91-357c-432d-8777-553e6c2d301d\") " pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.543990 master-0 kubenswrapper[32968]: I0309 16:56:40.541118 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c71de91-357c-432d-8777-553e6c2d301d-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-v9tt6\" (UID: \"9c71de91-357c-432d-8777-553e6c2d301d\") " pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.551468 master-0 kubenswrapper[32968]: I0309 16:56:40.547574 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg6s6\" (UniqueName: \"kubernetes.io/projected/9c71de91-357c-432d-8777-553e6c2d301d-kube-api-access-dg6s6\") pod \"cert-manager-webhook-6888856db4-v9tt6\" (UID: \"9c71de91-357c-432d-8777-553e6c2d301d\") " pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:40.574549 master-0 kubenswrapper[32968]: I0309 16:56:40.574448 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:41.296071 master-0 kubenswrapper[32968]: I0309 16:56:41.296000 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-v9tt6"]
Mar 09 16:56:42.221458 master-0 kubenswrapper[32968]: I0309 16:56:42.221330 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6" event={"ID":"9c71de91-357c-432d-8777-553e6c2d301d","Type":"ContainerStarted","Data":"d3415d996cae084a86e0acb893344729c59d1661b3608c00e1236730b524e887"}
Mar 09 16:56:44.125500 master-0 kubenswrapper[32968]: I0309 16:56:44.125399 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2tlzg"]
Mar 09 16:56:44.126489 master-0 kubenswrapper[32968]: I0309 16:56:44.126460 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.196560 master-0 kubenswrapper[32968]: I0309 16:56:44.196491 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2tlzg"]
Mar 09 16:56:44.230182 master-0 kubenswrapper[32968]: I0309 16:56:44.230077 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5d276de-2ee5-43d0-909a-5fa62b30b5de-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2tlzg\" (UID: \"b5d276de-2ee5-43d0-909a-5fa62b30b5de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.230589 master-0 kubenswrapper[32968]: I0309 16:56:44.230224 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6526\" (UniqueName: \"kubernetes.io/projected/b5d276de-2ee5-43d0-909a-5fa62b30b5de-kube-api-access-n6526\") pod \"cert-manager-cainjector-5545bd876-2tlzg\" (UID: \"b5d276de-2ee5-43d0-909a-5fa62b30b5de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.283192 master-0 kubenswrapper[32968]: I0309 16:56:44.282011 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"]
Mar 09 16:56:44.288938 master-0 kubenswrapper[32968]: I0309 16:56:44.283726 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"
Mar 09 16:56:44.288938 master-0 kubenswrapper[32968]: I0309 16:56:44.287961 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 09 16:56:44.297945 master-0 kubenswrapper[32968]: I0309 16:56:44.297857 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 09 16:56:44.332033 master-0 kubenswrapper[32968]: I0309 16:56:44.331930 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"]
Mar 09 16:56:44.333829 master-0 kubenswrapper[32968]: I0309 16:56:44.333760 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5d276de-2ee5-43d0-909a-5fa62b30b5de-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2tlzg\" (UID: \"b5d276de-2ee5-43d0-909a-5fa62b30b5de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.334109 master-0 kubenswrapper[32968]: I0309 16:56:44.334072 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9kq\" (UniqueName: \"kubernetes.io/projected/d43c8a33-8ade-41e9-997c-858cbdbb5801-kube-api-access-6r9kq\") pod \"nmstate-operator-75c5dccd6c-lx6wt\" (UID: \"d43c8a33-8ade-41e9-997c-858cbdbb5801\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"
Mar 09 16:56:44.334239 master-0 kubenswrapper[32968]: I0309 16:56:44.334192 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6526\" (UniqueName: \"kubernetes.io/projected/b5d276de-2ee5-43d0-909a-5fa62b30b5de-kube-api-access-n6526\") pod \"cert-manager-cainjector-5545bd876-2tlzg\" (UID: \"b5d276de-2ee5-43d0-909a-5fa62b30b5de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.365284 master-0 kubenswrapper[32968]: I0309 16:56:44.365228 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6526\" (UniqueName: \"kubernetes.io/projected/b5d276de-2ee5-43d0-909a-5fa62b30b5de-kube-api-access-n6526\") pod \"cert-manager-cainjector-5545bd876-2tlzg\" (UID: \"b5d276de-2ee5-43d0-909a-5fa62b30b5de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.368251 master-0 kubenswrapper[32968]: I0309 16:56:44.368182 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5d276de-2ee5-43d0-909a-5fa62b30b5de-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-2tlzg\" (UID: \"b5d276de-2ee5-43d0-909a-5fa62b30b5de\") " pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.437608 master-0 kubenswrapper[32968]: I0309 16:56:44.436788 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9kq\" (UniqueName: \"kubernetes.io/projected/d43c8a33-8ade-41e9-997c-858cbdbb5801-kube-api-access-6r9kq\") pod \"nmstate-operator-75c5dccd6c-lx6wt\" (UID: \"d43c8a33-8ade-41e9-997c-858cbdbb5801\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"
Mar 09 16:56:44.447335 master-0 kubenswrapper[32968]: I0309 16:56:44.447265 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg"
Mar 09 16:56:44.459369 master-0 kubenswrapper[32968]: I0309 16:56:44.459316 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9kq\" (UniqueName: \"kubernetes.io/projected/d43c8a33-8ade-41e9-997c-858cbdbb5801-kube-api-access-6r9kq\") pod \"nmstate-operator-75c5dccd6c-lx6wt\" (UID: \"d43c8a33-8ade-41e9-997c-858cbdbb5801\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"
Mar 09 16:56:44.629726 master-0 kubenswrapper[32968]: I0309 16:56:44.629660 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"
Mar 09 16:56:44.921849 master-0 kubenswrapper[32968]: I0309 16:56:44.921766 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-2tlzg"]
Mar 09 16:56:45.130823 master-0 kubenswrapper[32968]: I0309 16:56:45.130709 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt"]
Mar 09 16:56:45.169561 master-0 kubenswrapper[32968]: W0309 16:56:45.169479 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd43c8a33_8ade_41e9_997c_858cbdbb5801.slice/crio-639b9585693f841f1576566f3f27671ca5a6b418d9c79d925f728f14f93fb6ee WatchSource:0}: Error finding container 639b9585693f841f1576566f3f27671ca5a6b418d9c79d925f728f14f93fb6ee: Status 404 returned error can't find the container with id 639b9585693f841f1576566f3f27671ca5a6b418d9c79d925f728f14f93fb6ee
Mar 09 16:56:45.269341 master-0 kubenswrapper[32968]: I0309 16:56:45.269267 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg" event={"ID":"b5d276de-2ee5-43d0-909a-5fa62b30b5de","Type":"ContainerStarted","Data":"3a24cbc3593b2b9a163b026488e37b35c0caa65ab357b849968fd47e8015e60a"}
Mar 09 16:56:45.274106 master-0 kubenswrapper[32968]: I0309 16:56:45.274063 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt" event={"ID":"d43c8a33-8ade-41e9-997c-858cbdbb5801","Type":"ContainerStarted","Data":"639b9585693f841f1576566f3f27671ca5a6b418d9c79d925f728f14f93fb6ee"}
Mar 09 16:56:46.299008 master-0 kubenswrapper[32968]: I0309 16:56:46.298239 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerStarted","Data":"461c49fb410e11c6f172999e4cb13c97a87e880f297132e239a1b2f6723edbfb"}
Mar 09 16:56:48.324458 master-0 kubenswrapper[32968]: I0309 16:56:48.323290 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6" event={"ID":"9c71de91-357c-432d-8777-553e6c2d301d","Type":"ContainerStarted","Data":"b9645c7b47829467100d12605b3dc59fa030cdab7768f9f2cb280f9d15032bac"}
Mar 09 16:56:48.324458 master-0 kubenswrapper[32968]: I0309 16:56:48.323694 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6"
Mar 09 16:56:48.334485 master-0 kubenswrapper[32968]: I0309 16:56:48.334384 32968 generic.go:334] "Generic (PLEG): container finished" podID="ac418822-ffae-4170-9759-4f9e465b489b" containerID="461c49fb410e11c6f172999e4cb13c97a87e880f297132e239a1b2f6723edbfb" exitCode=0
Mar 09 16:56:48.334822 master-0 kubenswrapper[32968]: I0309 16:56:48.334560 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerDied","Data":"461c49fb410e11c6f172999e4cb13c97a87e880f297132e239a1b2f6723edbfb"}
Mar 09 16:56:48.337540 master-0 kubenswrapper[32968]: I0309 16:56:48.337477 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg" event={"ID":"b5d276de-2ee5-43d0-909a-5fa62b30b5de","Type":"ContainerStarted","Data":"d4ddc1153298c303bc20cd5f67e6710248af9c39745edcc580c17c7622fe5fcd"}
Mar 09 16:56:48.363063 master-0 kubenswrapper[32968]: I0309 16:56:48.361917 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6" podStartSLOduration=2.147501007 podStartE2EDuration="8.361883586s" podCreationTimestamp="2026-03-09 16:56:40 +0000 UTC" firstStartedPulling="2026-03-09 16:56:41.292312671 +0000 UTC m=+627.395635211" lastFinishedPulling="2026-03-09 16:56:47.50669525 +0000 UTC m=+633.610017790" observedRunningTime="2026-03-09 16:56:48.350903193 +0000 UTC m=+634.454225733" watchObservedRunningTime="2026-03-09 16:56:48.361883586 +0000 UTC m=+634.465206136"
Mar 09 16:56:48.382754 master-0 kubenswrapper[32968]: I0309 16:56:48.382657 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-2tlzg" podStartSLOduration=1.818239833 podStartE2EDuration="4.38262222s" podCreationTimestamp="2026-03-09 16:56:44 +0000 UTC" firstStartedPulling="2026-03-09 16:56:44.94333098 +0000 UTC m=+631.046653520" lastFinishedPulling="2026-03-09 16:56:47.507713367 +0000 UTC m=+633.611035907" observedRunningTime="2026-03-09 16:56:48.376756548 +0000 UTC m=+634.480079088" watchObservedRunningTime="2026-03-09 16:56:48.38262222 +0000 UTC m=+634.485944760"
Mar 09 16:56:49.347936 master-0 kubenswrapper[32968]: I0309 16:56:49.347816 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerStarted","Data":"daf4f7aa471235b49239d3ccd07783296752e0ae0742bb65626f19a77b8dc8ee"}
Mar 09 16:56:49.858227 master-0 kubenswrapper[32968]: I0309 16:56:49.858131 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" podStartSLOduration=6.659981196 podStartE2EDuration="25.858092793s" podCreationTimestamp="2026-03-09 16:56:24 +0000 UTC" firstStartedPulling="2026-03-09 16:56:26.038737672 +0000 UTC m=+612.142060212" lastFinishedPulling="2026-03-09 16:56:45.236849269 +0000 UTC m=+631.340171809" observedRunningTime="2026-03-09 16:56:49.858004931 +0000 UTC m=+635.961327471" watchObservedRunningTime="2026-03-09 16:56:49.858092793 +0000 UTC m=+635.961415333"
Mar 09 16:56:50.362351 master-0 kubenswrapper[32968]: I0309 16:56:50.362238 32968 generic.go:334] "Generic (PLEG): container finished" podID="ac418822-ffae-4170-9759-4f9e465b489b" containerID="daf4f7aa471235b49239d3ccd07783296752e0ae0742bb65626f19a77b8dc8ee" exitCode=0
Mar 09 16:56:50.362351 master-0 kubenswrapper[32968]: I0309 16:56:50.362300 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerDied","Data":"daf4f7aa471235b49239d3ccd07783296752e0ae0742bb65626f19a77b8dc8ee"}
Mar 09 16:56:50.831117 master-0 kubenswrapper[32968]: I0309 16:56:50.831032 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"]
Mar 09 16:56:50.832712 master-0 kubenswrapper[32968]: I0309 16:56:50.832683 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:50.838861 master-0 kubenswrapper[32968]: I0309 16:56:50.838797 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 09 16:56:50.839155 master-0 kubenswrapper[32968]: I0309 16:56:50.838798 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 09 16:56:50.839155 master-0 kubenswrapper[32968]: I0309 16:56:50.838977 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 09 16:56:50.839155 master-0 kubenswrapper[32968]: I0309 16:56:50.839118 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 09 16:56:50.862792 master-0 kubenswrapper[32968]: I0309 16:56:50.862709 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"]
Mar 09 16:56:50.908822 master-0 kubenswrapper[32968]: I0309 16:56:50.908733 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e9b9776-457d-4dfb-815b-efbb16a07518-apiservice-cert\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:50.908822 master-0 kubenswrapper[32968]: I0309 16:56:50.908826 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226fw\" (UniqueName: \"kubernetes.io/projected/6e9b9776-457d-4dfb-815b-efbb16a07518-kube-api-access-226fw\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:50.909183 master-0 kubenswrapper[32968]: I0309 16:56:50.908864 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e9b9776-457d-4dfb-815b-efbb16a07518-webhook-cert\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.011677 master-0 kubenswrapper[32968]: I0309 16:56:51.011596 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e9b9776-457d-4dfb-815b-efbb16a07518-apiservice-cert\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.011677 master-0 kubenswrapper[32968]: I0309 16:56:51.011676 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226fw\" (UniqueName: \"kubernetes.io/projected/6e9b9776-457d-4dfb-815b-efbb16a07518-kube-api-access-226fw\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.011960 master-0 kubenswrapper[32968]: I0309 16:56:51.011718 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e9b9776-457d-4dfb-815b-efbb16a07518-webhook-cert\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.037463 master-0 kubenswrapper[32968]: I0309 16:56:51.035410 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e9b9776-457d-4dfb-815b-efbb16a07518-webhook-cert\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.037463 master-0 kubenswrapper[32968]: I0309 16:56:51.035989 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e9b9776-457d-4dfb-815b-efbb16a07518-apiservice-cert\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.042448 master-0 kubenswrapper[32968]: I0309 16:56:51.040208 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226fw\" (UniqueName: \"kubernetes.io/projected/6e9b9776-457d-4dfb-815b-efbb16a07518-kube-api-access-226fw\") pod \"metallb-operator-controller-manager-786c9ddc85-wqcbt\" (UID: \"6e9b9776-457d-4dfb-815b-efbb16a07518\") " pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.155109 master-0 kubenswrapper[32968]: I0309 16:56:51.154981 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"
Mar 09 16:56:51.397672 master-0 kubenswrapper[32968]: I0309 16:56:51.397540 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt" event={"ID":"d43c8a33-8ade-41e9-997c-858cbdbb5801","Type":"ContainerStarted","Data":"364d03c2d50c3b190d7af7ffd1d17d74f556066cc5470c036a870b15e66ce52f"}
Mar 09 16:56:51.434069 master-0 kubenswrapper[32968]: I0309 16:56:51.433984 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-lx6wt" podStartSLOduration=2.297879603 podStartE2EDuration="7.433960091s" podCreationTimestamp="2026-03-09 16:56:44 +0000 UTC" firstStartedPulling="2026-03-09 16:56:45.173674524 +0000 UTC m=+631.276997064" lastFinishedPulling="2026-03-09 16:56:50.309755012 +0000 UTC m=+636.413077552" observedRunningTime="2026-03-09 16:56:51.426895035 +0000 UTC m=+637.530217585" watchObservedRunningTime="2026-03-09 16:56:51.433960091 +0000 UTC m=+637.537282631"
Mar 09 16:56:51.638400 master-0 kubenswrapper[32968]: I0309 16:56:51.638265 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"]
Mar 09 16:56:51.639695 master-0 kubenswrapper[32968]: I0309 16:56:51.639669 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.651112 master-0 kubenswrapper[32968]: I0309 16:56:51.651019 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 09 16:56:51.651332 master-0 kubenswrapper[32968]: I0309 16:56:51.651019 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 09 16:56:51.671487 master-0 kubenswrapper[32968]: I0309 16:56:51.667846 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"]
Mar 09 16:56:51.737066 master-0 kubenswrapper[32968]: I0309 16:56:51.736949 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/471148f9-0403-4178-b482-9db02c8bf893-webhook-cert\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.737350 master-0 kubenswrapper[32968]: I0309 16:56:51.737328 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/471148f9-0403-4178-b482-9db02c8bf893-apiservice-cert\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.737496 master-0 kubenswrapper[32968]: I0309 16:56:51.737479 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k9jm\" (UniqueName: \"kubernetes.io/projected/471148f9-0403-4178-b482-9db02c8bf893-kube-api-access-7k9jm\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.794892 master-0 kubenswrapper[32968]: W0309 16:56:51.792988 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9b9776_457d_4dfb_815b_efbb16a07518.slice/crio-9f68d5ccbcdf6e350eff7f30fcbce4048195d46db7af5b809b6ee19a5e411bed WatchSource:0}: Error finding container 9f68d5ccbcdf6e350eff7f30fcbce4048195d46db7af5b809b6ee19a5e411bed: Status 404 returned error can't find the container with id 9f68d5ccbcdf6e350eff7f30fcbce4048195d46db7af5b809b6ee19a5e411bed
Mar 09 16:56:51.816947 master-0 kubenswrapper[32968]: I0309 16:56:51.816857 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt"]
Mar 09 16:56:51.840606 master-0 kubenswrapper[32968]: I0309 16:56:51.840375 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/471148f9-0403-4178-b482-9db02c8bf893-webhook-cert\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.843586 master-0 kubenswrapper[32968]: I0309 16:56:51.842302 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/471148f9-0403-4178-b482-9db02c8bf893-apiservice-cert\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.860459 master-0 kubenswrapper[32968]: I0309 16:56:51.857532 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/471148f9-0403-4178-b482-9db02c8bf893-apiservice-cert\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.866092 master-0 kubenswrapper[32968]: I0309 16:56:51.864667 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/471148f9-0403-4178-b482-9db02c8bf893-webhook-cert\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.866092 master-0 kubenswrapper[32968]: I0309 16:56:51.857032 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k9jm\" (UniqueName: \"kubernetes.io/projected/471148f9-0403-4178-b482-9db02c8bf893-kube-api-access-7k9jm\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.895465 master-0 kubenswrapper[32968]: I0309 16:56:51.895091 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k9jm\" (UniqueName: \"kubernetes.io/projected/471148f9-0403-4178-b482-9db02c8bf893-kube-api-access-7k9jm\") pod \"metallb-operator-webhook-server-89fbb7654-4rp6p\" (UID: \"471148f9-0403-4178-b482-9db02c8bf893\") " pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:51.952228 master-0 kubenswrapper[32968]: I0309 16:56:51.951910 32968 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2"
Mar 09 16:56:52.069734 master-0 kubenswrapper[32968]: I0309 16:56:52.069654 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-util\") pod \"ac418822-ffae-4170-9759-4f9e465b489b\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") "
Mar 09 16:56:52.069974 master-0 kubenswrapper[32968]: I0309 16:56:52.069867 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-bundle\") pod \"ac418822-ffae-4170-9759-4f9e465b489b\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") "
Mar 09 16:56:52.069974 master-0 kubenswrapper[32968]: I0309 16:56:52.069951 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxgk6\" (UniqueName: \"kubernetes.io/projected/ac418822-ffae-4170-9759-4f9e465b489b-kube-api-access-sxgk6\") pod \"ac418822-ffae-4170-9759-4f9e465b489b\" (UID: \"ac418822-ffae-4170-9759-4f9e465b489b\") "
Mar 09 16:56:52.073169 master-0 kubenswrapper[32968]: I0309 16:56:52.073077 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-bundle" (OuterVolumeSpecName: "bundle") pod "ac418822-ffae-4170-9759-4f9e465b489b" (UID: "ac418822-ffae-4170-9759-4f9e465b489b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 16:56:52.074964 master-0 kubenswrapper[32968]: I0309 16:56:52.074877 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac418822-ffae-4170-9759-4f9e465b489b-kube-api-access-sxgk6" (OuterVolumeSpecName: "kube-api-access-sxgk6") pod "ac418822-ffae-4170-9759-4f9e465b489b" (UID: "ac418822-ffae-4170-9759-4f9e465b489b"). InnerVolumeSpecName "kube-api-access-sxgk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 09 16:56:52.083997 master-0 kubenswrapper[32968]: I0309 16:56:52.083917 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-util" (OuterVolumeSpecName: "util") pod "ac418822-ffae-4170-9759-4f9e465b489b" (UID: "ac418822-ffae-4170-9759-4f9e465b489b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 09 16:56:52.147562 master-0 kubenswrapper[32968]: I0309 16:56:52.147511 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"
Mar 09 16:56:52.175663 master-0 kubenswrapper[32968]: I0309 16:56:52.172858 32968 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-util\") on node \"master-0\" DevicePath \"\""
Mar 09 16:56:52.175663 master-0 kubenswrapper[32968]: I0309 16:56:52.172915 32968 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac418822-ffae-4170-9759-4f9e465b489b-bundle\") on node \"master-0\" DevicePath \"\""
Mar 09 16:56:52.175663 master-0 kubenswrapper[32968]: I0309 16:56:52.172932 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxgk6\" (UniqueName: \"kubernetes.io/projected/ac418822-ffae-4170-9759-4f9e465b489b-kube-api-access-sxgk6\") on node \"master-0\" DevicePath \"\""
Mar 09 16:56:52.410053 master-0 kubenswrapper[32968]: I0309 16:56:52.409975 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt" event={"ID":"6e9b9776-457d-4dfb-815b-efbb16a07518","Type":"ContainerStarted","Data":"9f68d5ccbcdf6e350eff7f30fcbce4048195d46db7af5b809b6ee19a5e411bed"}
Mar 09 16:56:52.413324 master-0 kubenswrapper[32968]: I0309 16:56:52.413246 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" event={"ID":"ac418822-ffae-4170-9759-4f9e465b489b","Type":"ContainerDied","Data":"e3fca4688f5ec8b6854178a1120314cd36d88c0bfb6f6c1f3a60dcc757de6ce5"}
Mar 09 16:56:52.413324 master-0 kubenswrapper[32968]: I0309 16:56:52.413300 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08w68n2" Mar 09 16:56:52.413324 master-0 kubenswrapper[32968]: I0309 16:56:52.413323 32968 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3fca4688f5ec8b6854178a1120314cd36d88c0bfb6f6c1f3a60dcc757de6ce5" Mar 09 16:56:52.721347 master-0 kubenswrapper[32968]: I0309 16:56:52.721268 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p"] Mar 09 16:56:52.748868 master-0 kubenswrapper[32968]: W0309 16:56:52.748811 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod471148f9_0403_4178_b482_9db02c8bf893.slice/crio-5871e710ff71e7ed55b8d93e759874fd502cfeb7e34af47771ebe88701e0440b WatchSource:0}: Error finding container 5871e710ff71e7ed55b8d93e759874fd502cfeb7e34af47771ebe88701e0440b: Status 404 returned error can't find the container with id 5871e710ff71e7ed55b8d93e759874fd502cfeb7e34af47771ebe88701e0440b Mar 09 16:56:53.424459 master-0 kubenswrapper[32968]: I0309 16:56:53.424372 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p" event={"ID":"471148f9-0403-4178-b482-9db02c8bf893","Type":"ContainerStarted","Data":"5871e710ff71e7ed55b8d93e759874fd502cfeb7e34af47771ebe88701e0440b"} Mar 09 16:56:55.582027 master-0 kubenswrapper[32968]: I0309 16:56:55.581937 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-v9tt6" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: I0309 16:56:59.766721 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-2g4f7"] Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: E0309 16:56:59.767234 32968 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="extract" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: I0309 16:56:59.767276 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="extract" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: E0309 16:56:59.767313 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="pull" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: I0309 16:56:59.767323 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="pull" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: E0309 16:56:59.767363 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="util" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: I0309 16:56:59.767373 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="util" Mar 09 16:56:59.768375 master-0 kubenswrapper[32968]: I0309 16:56:59.767707 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac418822-ffae-4170-9759-4f9e465b489b" containerName="extract" Mar 09 16:56:59.780828 master-0 kubenswrapper[32968]: I0309 16:56:59.780739 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-2g4f7"] Mar 09 16:56:59.781037 master-0 kubenswrapper[32968]: I0309 16:56:59.780887 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:56:59.871550 master-0 kubenswrapper[32968]: I0309 16:56:59.871453 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bsf9\" (UniqueName: \"kubernetes.io/projected/523edf26-a655-4c5b-ae03-0e9e5ebd9ef7-kube-api-access-6bsf9\") pod \"cert-manager-545d4d4674-2g4f7\" (UID: \"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7\") " pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:56:59.872414 master-0 kubenswrapper[32968]: I0309 16:56:59.872327 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523edf26-a655-4c5b-ae03-0e9e5ebd9ef7-bound-sa-token\") pod \"cert-manager-545d4d4674-2g4f7\" (UID: \"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7\") " pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:56:59.981641 master-0 kubenswrapper[32968]: I0309 16:56:59.976513 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523edf26-a655-4c5b-ae03-0e9e5ebd9ef7-bound-sa-token\") pod \"cert-manager-545d4d4674-2g4f7\" (UID: \"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7\") " pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:56:59.981641 master-0 kubenswrapper[32968]: I0309 16:56:59.976577 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bsf9\" (UniqueName: \"kubernetes.io/projected/523edf26-a655-4c5b-ae03-0e9e5ebd9ef7-kube-api-access-6bsf9\") pod \"cert-manager-545d4d4674-2g4f7\" (UID: \"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7\") " pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:56:59.999392 master-0 kubenswrapper[32968]: I0309 16:56:59.999245 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bsf9\" (UniqueName: 
\"kubernetes.io/projected/523edf26-a655-4c5b-ae03-0e9e5ebd9ef7-kube-api-access-6bsf9\") pod \"cert-manager-545d4d4674-2g4f7\" (UID: \"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7\") " pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:57:00.003749 master-0 kubenswrapper[32968]: I0309 16:57:00.003677 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/523edf26-a655-4c5b-ae03-0e9e5ebd9ef7-bound-sa-token\") pod \"cert-manager-545d4d4674-2g4f7\" (UID: \"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7\") " pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:57:00.117293 master-0 kubenswrapper[32968]: I0309 16:57:00.117139 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-2g4f7" Mar 09 16:57:01.969823 master-0 kubenswrapper[32968]: I0309 16:57:01.969209 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-2g4f7"] Mar 09 16:57:02.583928 master-0 kubenswrapper[32968]: I0309 16:57:02.583856 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p" event={"ID":"471148f9-0403-4178-b482-9db02c8bf893","Type":"ContainerStarted","Data":"1da41a7b235ab7ba4b894fa97759fa41531e091b9f276c9ab4993777e41e09e6"} Mar 09 16:57:02.584264 master-0 kubenswrapper[32968]: I0309 16:57:02.583977 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p" Mar 09 16:57:02.586730 master-0 kubenswrapper[32968]: I0309 16:57:02.586665 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-2g4f7" event={"ID":"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7","Type":"ContainerStarted","Data":"162dd8a86e931f651e50ade6d90f5c9e5103e9318f0ea54d9d437f94d42321d4"} Mar 09 16:57:02.586811 master-0 kubenswrapper[32968]: I0309 16:57:02.586745 32968 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-2g4f7" event={"ID":"523edf26-a655-4c5b-ae03-0e9e5ebd9ef7","Type":"ContainerStarted","Data":"3e4b9abe9bc41d893f9f004bcc1750649457de43b41ccd4a5a09705290ed936b"} Mar 09 16:57:02.589625 master-0 kubenswrapper[32968]: I0309 16:57:02.589589 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt" event={"ID":"6e9b9776-457d-4dfb-815b-efbb16a07518","Type":"ContainerStarted","Data":"f37aa83d418ca5c5356c13652b279042ed81139567cf537f55538eea5620f6d5"} Mar 09 16:57:02.589848 master-0 kubenswrapper[32968]: I0309 16:57:02.589801 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt" Mar 09 16:57:02.621158 master-0 kubenswrapper[32968]: I0309 16:57:02.621068 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p" podStartSLOduration=3.016257669 podStartE2EDuration="11.621048451s" podCreationTimestamp="2026-03-09 16:56:51 +0000 UTC" firstStartedPulling="2026-03-09 16:56:52.756615523 +0000 UTC m=+638.859938063" lastFinishedPulling="2026-03-09 16:57:01.361406305 +0000 UTC m=+647.464728845" observedRunningTime="2026-03-09 16:57:02.61408004 +0000 UTC m=+648.717402580" watchObservedRunningTime="2026-03-09 16:57:02.621048451 +0000 UTC m=+648.724370991" Mar 09 16:57:02.665178 master-0 kubenswrapper[32968]: I0309 16:57:02.665079 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt" podStartSLOduration=3.1402565940000002 podStartE2EDuration="12.665054389s" podCreationTimestamp="2026-03-09 16:56:50 +0000 UTC" firstStartedPulling="2026-03-09 16:56:51.804387425 +0000 UTC m=+637.907709965" lastFinishedPulling="2026-03-09 16:57:01.32918522 +0000 UTC m=+647.432507760" 
observedRunningTime="2026-03-09 16:57:02.662694324 +0000 UTC m=+648.766016864" watchObservedRunningTime="2026-03-09 16:57:02.665054389 +0000 UTC m=+648.768376929" Mar 09 16:57:02.881929 master-0 kubenswrapper[32968]: I0309 16:57:02.881694 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-2g4f7" podStartSLOduration=3.8816552460000002 podStartE2EDuration="3.881655246s" podCreationTimestamp="2026-03-09 16:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:57:02.879136727 +0000 UTC m=+648.982459347" watchObservedRunningTime="2026-03-09 16:57:02.881655246 +0000 UTC m=+648.984977786" Mar 09 16:57:09.787164 master-0 kubenswrapper[32968]: I0309 16:57:09.787067 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7"] Mar 09 16:57:09.788394 master-0 kubenswrapper[32968]: I0309 16:57:09.788354 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" Mar 09 16:57:09.793658 master-0 kubenswrapper[32968]: I0309 16:57:09.793573 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 09 16:57:09.794187 master-0 kubenswrapper[32968]: I0309 16:57:09.794136 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 09 16:57:09.805058 master-0 kubenswrapper[32968]: I0309 16:57:09.805004 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7"] Mar 09 16:57:09.875263 master-0 kubenswrapper[32968]: I0309 16:57:09.875208 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgqf\" (UniqueName: \"kubernetes.io/projected/13a135cc-bb42-4474-9f5b-53525ba032c7-kube-api-access-kmgqf\") pod \"obo-prometheus-operator-68bc856cb9-ffkk7\" (UID: \"13a135cc-bb42-4474-9f5b-53525ba032c7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" Mar 09 16:57:09.937887 master-0 kubenswrapper[32968]: I0309 16:57:09.937784 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9"] Mar 09 16:57:09.939199 master-0 kubenswrapper[32968]: I0309 16:57:09.939167 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:09.944964 master-0 kubenswrapper[32968]: I0309 16:57:09.944912 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 09 16:57:09.963636 master-0 kubenswrapper[32968]: I0309 16:57:09.962885 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9"] Mar 09 16:57:09.965796 master-0 kubenswrapper[32968]: I0309 16:57:09.964344 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:09.980279 master-0 kubenswrapper[32968]: I0309 16:57:09.980190 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9"] Mar 09 16:57:09.984487 master-0 kubenswrapper[32968]: I0309 16:57:09.984003 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgqf\" (UniqueName: \"kubernetes.io/projected/13a135cc-bb42-4474-9f5b-53525ba032c7-kube-api-access-kmgqf\") pod \"obo-prometheus-operator-68bc856cb9-ffkk7\" (UID: \"13a135cc-bb42-4474-9f5b-53525ba032c7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" Mar 09 16:57:10.014179 master-0 kubenswrapper[32968]: I0309 16:57:10.014098 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9"] Mar 09 16:57:10.025267 master-0 kubenswrapper[32968]: I0309 16:57:10.025204 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgqf\" (UniqueName: \"kubernetes.io/projected/13a135cc-bb42-4474-9f5b-53525ba032c7-kube-api-access-kmgqf\") pod \"obo-prometheus-operator-68bc856cb9-ffkk7\" (UID: 
\"13a135cc-bb42-4474-9f5b-53525ba032c7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" Mar 09 16:57:10.089445 master-0 kubenswrapper[32968]: I0309 16:57:10.088063 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/814f8331-d4af-45a1-a75c-77c152a08f6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9\" (UID: \"814f8331-d4af-45a1-a75c-77c152a08f6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.089445 master-0 kubenswrapper[32968]: I0309 16:57:10.088161 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c826a8d-0b9f-472c-9a8a-55abbd01f55b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9\" (UID: \"0c826a8d-0b9f-472c-9a8a-55abbd01f55b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.089445 master-0 kubenswrapper[32968]: I0309 16:57:10.088192 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c826a8d-0b9f-472c-9a8a-55abbd01f55b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9\" (UID: \"0c826a8d-0b9f-472c-9a8a-55abbd01f55b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.089445 master-0 kubenswrapper[32968]: I0309 16:57:10.088302 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/814f8331-d4af-45a1-a75c-77c152a08f6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9\" (UID: \"814f8331-d4af-45a1-a75c-77c152a08f6e\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.116896 master-0 kubenswrapper[32968]: I0309 16:57:10.108793 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" Mar 09 16:57:10.206444 master-0 kubenswrapper[32968]: I0309 16:57:10.199081 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/814f8331-d4af-45a1-a75c-77c152a08f6e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9\" (UID: \"814f8331-d4af-45a1-a75c-77c152a08f6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.206444 master-0 kubenswrapper[32968]: I0309 16:57:10.199299 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/814f8331-d4af-45a1-a75c-77c152a08f6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9\" (UID: \"814f8331-d4af-45a1-a75c-77c152a08f6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.206444 master-0 kubenswrapper[32968]: I0309 16:57:10.199370 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c826a8d-0b9f-472c-9a8a-55abbd01f55b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9\" (UID: \"0c826a8d-0b9f-472c-9a8a-55abbd01f55b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.206444 master-0 kubenswrapper[32968]: I0309 16:57:10.199409 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c826a8d-0b9f-472c-9a8a-55abbd01f55b-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9\" (UID: \"0c826a8d-0b9f-472c-9a8a-55abbd01f55b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.217445 master-0 kubenswrapper[32968]: I0309 16:57:10.210599 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-sbwv6"] Mar 09 16:57:10.225696 master-0 kubenswrapper[32968]: I0309 16:57:10.220376 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.245451 master-0 kubenswrapper[32968]: I0309 16:57:10.240611 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c826a8d-0b9f-472c-9a8a-55abbd01f55b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9\" (UID: \"0c826a8d-0b9f-472c-9a8a-55abbd01f55b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.245451 master-0 kubenswrapper[32968]: I0309 16:57:10.240605 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c826a8d-0b9f-472c-9a8a-55abbd01f55b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9\" (UID: \"0c826a8d-0b9f-472c-9a8a-55abbd01f55b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.245451 master-0 kubenswrapper[32968]: I0309 16:57:10.240692 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 09 16:57:10.245451 master-0 kubenswrapper[32968]: I0309 16:57:10.240730 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/814f8331-d4af-45a1-a75c-77c152a08f6e-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9\" (UID: \"814f8331-d4af-45a1-a75c-77c152a08f6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.245451 master-0 kubenswrapper[32968]: I0309 16:57:10.240994 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/814f8331-d4af-45a1-a75c-77c152a08f6e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9\" (UID: \"814f8331-d4af-45a1-a75c-77c152a08f6e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.292449 master-0 kubenswrapper[32968]: I0309 16:57:10.281337 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" Mar 09 16:57:10.312453 master-0 kubenswrapper[32968]: I0309 16:57:10.300995 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-sbwv6"] Mar 09 16:57:10.317865 master-0 kubenswrapper[32968]: I0309 16:57:10.315109 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" Mar 09 16:57:10.443909 master-0 kubenswrapper[32968]: I0309 16:57:10.419727 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cxxb\" (UniqueName: \"kubernetes.io/projected/6eecbb3a-7475-4f18-a1db-8bc4f4163a97-kube-api-access-8cxxb\") pod \"observability-operator-59bdc8b94-sbwv6\" (UID: \"6eecbb3a-7475-4f18-a1db-8bc4f4163a97\") " pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.443909 master-0 kubenswrapper[32968]: I0309 16:57:10.420070 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eecbb3a-7475-4f18-a1db-8bc4f4163a97-observability-operator-tls\") pod \"observability-operator-59bdc8b94-sbwv6\" (UID: \"6eecbb3a-7475-4f18-a1db-8bc4f4163a97\") " pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.522461 master-0 kubenswrapper[32968]: I0309 16:57:10.520487 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r2dfb"] Mar 09 16:57:10.522461 master-0 kubenswrapper[32968]: I0309 16:57:10.521898 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.528172 master-0 kubenswrapper[32968]: I0309 16:57:10.524723 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cxxb\" (UniqueName: \"kubernetes.io/projected/6eecbb3a-7475-4f18-a1db-8bc4f4163a97-kube-api-access-8cxxb\") pod \"observability-operator-59bdc8b94-sbwv6\" (UID: \"6eecbb3a-7475-4f18-a1db-8bc4f4163a97\") " pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.528172 master-0 kubenswrapper[32968]: I0309 16:57:10.524853 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eecbb3a-7475-4f18-a1db-8bc4f4163a97-observability-operator-tls\") pod \"observability-operator-59bdc8b94-sbwv6\" (UID: \"6eecbb3a-7475-4f18-a1db-8bc4f4163a97\") " pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.540521 master-0 kubenswrapper[32968]: I0309 16:57:10.532861 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-r2dfb"] Mar 09 16:57:10.562799 master-0 kubenswrapper[32968]: I0309 16:57:10.544169 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6eecbb3a-7475-4f18-a1db-8bc4f4163a97-observability-operator-tls\") pod \"observability-operator-59bdc8b94-sbwv6\" (UID: \"6eecbb3a-7475-4f18-a1db-8bc4f4163a97\") " pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.597511 master-0 kubenswrapper[32968]: I0309 16:57:10.570859 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cxxb\" (UniqueName: \"kubernetes.io/projected/6eecbb3a-7475-4f18-a1db-8bc4f4163a97-kube-api-access-8cxxb\") pod \"observability-operator-59bdc8b94-sbwv6\" (UID: \"6eecbb3a-7475-4f18-a1db-8bc4f4163a97\") " 
pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.627379 master-0 kubenswrapper[32968]: I0309 16:57:10.627246 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb6baf87-eba1-4d4f-856d-8aed4674a10d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r2dfb\" (UID: \"fb6baf87-eba1-4d4f-856d-8aed4674a10d\") " pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.635485 master-0 kubenswrapper[32968]: I0309 16:57:10.632220 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbdx\" (UniqueName: \"kubernetes.io/projected/fb6baf87-eba1-4d4f-856d-8aed4674a10d-kube-api-access-pcbdx\") pod \"perses-operator-5bf474d74f-r2dfb\" (UID: \"fb6baf87-eba1-4d4f-856d-8aed4674a10d\") " pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.644146 master-0 kubenswrapper[32968]: I0309 16:57:10.643593 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:10.751458 master-0 kubenswrapper[32968]: I0309 16:57:10.751007 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbdx\" (UniqueName: \"kubernetes.io/projected/fb6baf87-eba1-4d4f-856d-8aed4674a10d-kube-api-access-pcbdx\") pod \"perses-operator-5bf474d74f-r2dfb\" (UID: \"fb6baf87-eba1-4d4f-856d-8aed4674a10d\") " pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.751458 master-0 kubenswrapper[32968]: I0309 16:57:10.751111 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb6baf87-eba1-4d4f-856d-8aed4674a10d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r2dfb\" (UID: \"fb6baf87-eba1-4d4f-856d-8aed4674a10d\") " pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.767241 master-0 kubenswrapper[32968]: I0309 16:57:10.765415 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb6baf87-eba1-4d4f-856d-8aed4674a10d-openshift-service-ca\") pod \"perses-operator-5bf474d74f-r2dfb\" (UID: \"fb6baf87-eba1-4d4f-856d-8aed4674a10d\") " pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.827304 master-0 kubenswrapper[32968]: I0309 16:57:10.827187 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbdx\" (UniqueName: \"kubernetes.io/projected/fb6baf87-eba1-4d4f-856d-8aed4674a10d-kube-api-access-pcbdx\") pod \"perses-operator-5bf474d74f-r2dfb\" (UID: \"fb6baf87-eba1-4d4f-856d-8aed4674a10d\") " pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:10.843827 master-0 kubenswrapper[32968]: I0309 16:57:10.842037 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7"] Mar 09 16:57:10.858215 master-0 kubenswrapper[32968]: I0309 16:57:10.857629 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:11.207315 master-0 kubenswrapper[32968]: I0309 16:57:11.207202 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9"] Mar 09 16:57:11.208724 master-0 kubenswrapper[32968]: W0309 16:57:11.208459 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c826a8d_0b9f_472c_9a8a_55abbd01f55b.slice/crio-b239583f8e76f6c7202e822fffb0e910c98dce8a38bf0010d81607c894c18174 WatchSource:0}: Error finding container b239583f8e76f6c7202e822fffb0e910c98dce8a38bf0010d81607c894c18174: Status 404 returned error can't find the container with id b239583f8e76f6c7202e822fffb0e910c98dce8a38bf0010d81607c894c18174 Mar 09 16:57:11.309642 master-0 kubenswrapper[32968]: I0309 16:57:11.308551 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9"] Mar 09 16:57:11.338540 master-0 kubenswrapper[32968]: W0309 16:57:11.334606 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814f8331_d4af_45a1_a75c_77c152a08f6e.slice/crio-57e6dcaec4dea0819353b310b3a37cfc8517fa5203accbdc37825f4f4d668fcf WatchSource:0}: Error finding container 57e6dcaec4dea0819353b310b3a37cfc8517fa5203accbdc37825f4f4d668fcf: Status 404 returned error can't find the container with id 57e6dcaec4dea0819353b310b3a37cfc8517fa5203accbdc37825f4f4d668fcf Mar 09 16:57:11.465876 master-0 kubenswrapper[32968]: I0309 16:57:11.463666 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-r2dfb"] Mar 09 16:57:11.478257 master-0 kubenswrapper[32968]: W0309 16:57:11.477819 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6baf87_eba1_4d4f_856d_8aed4674a10d.slice/crio-73581be81ce43aa41aa42310cf54579f9120999b4a95457f09ca2ee475341e36 WatchSource:0}: Error finding container 73581be81ce43aa41aa42310cf54579f9120999b4a95457f09ca2ee475341e36: Status 404 returned error can't find the container with id 73581be81ce43aa41aa42310cf54579f9120999b4a95457f09ca2ee475341e36 Mar 09 16:57:11.591469 master-0 kubenswrapper[32968]: I0309 16:57:11.587715 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-sbwv6"] Mar 09 16:57:11.734998 master-0 kubenswrapper[32968]: I0309 16:57:11.734732 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" event={"ID":"6eecbb3a-7475-4f18-a1db-8bc4f4163a97","Type":"ContainerStarted","Data":"c92d51ac36cb4d6587836201a0387b41d12d2278b5c5599dfe77e9f042d32adf"} Mar 09 16:57:11.736360 master-0 kubenswrapper[32968]: I0309 16:57:11.736288 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" event={"ID":"0c826a8d-0b9f-472c-9a8a-55abbd01f55b","Type":"ContainerStarted","Data":"b239583f8e76f6c7202e822fffb0e910c98dce8a38bf0010d81607c894c18174"} Mar 09 16:57:11.738486 master-0 kubenswrapper[32968]: I0309 16:57:11.738442 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" event={"ID":"fb6baf87-eba1-4d4f-856d-8aed4674a10d","Type":"ContainerStarted","Data":"73581be81ce43aa41aa42310cf54579f9120999b4a95457f09ca2ee475341e36"} Mar 09 16:57:11.740611 master-0 kubenswrapper[32968]: I0309 16:57:11.740540 32968 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" event={"ID":"814f8331-d4af-45a1-a75c-77c152a08f6e","Type":"ContainerStarted","Data":"57e6dcaec4dea0819353b310b3a37cfc8517fa5203accbdc37825f4f4d668fcf"} Mar 09 16:57:11.741926 master-0 kubenswrapper[32968]: I0309 16:57:11.741879 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" event={"ID":"13a135cc-bb42-4474-9f5b-53525ba032c7","Type":"ContainerStarted","Data":"dc592899589346c94f019ce6dc00f2fa5933869a58ebd79131e61314d9438525"} Mar 09 16:57:12.182513 master-0 kubenswrapper[32968]: I0309 16:57:12.180875 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-89fbb7654-4rp6p" Mar 09 16:57:25.963495 master-0 kubenswrapper[32968]: I0309 16:57:25.963411 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" event={"ID":"13a135cc-bb42-4474-9f5b-53525ba032c7","Type":"ContainerStarted","Data":"64cb021dafebe9cf08c40f68e9b79a7c9bd98e5d7ec878a53ee0705b7bcd886e"} Mar 09 16:57:25.967760 master-0 kubenswrapper[32968]: I0309 16:57:25.967716 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" event={"ID":"6eecbb3a-7475-4f18-a1db-8bc4f4163a97","Type":"ContainerStarted","Data":"93521f401281b92b6fa1d678459cab49b3a04caa2b95faa20a7f957a74b43703"} Mar 09 16:57:25.968135 master-0 kubenswrapper[32968]: I0309 16:57:25.968074 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:25.970531 master-0 kubenswrapper[32968]: I0309 16:57:25.970467 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" 
event={"ID":"0c826a8d-0b9f-472c-9a8a-55abbd01f55b","Type":"ContainerStarted","Data":"ac9769ba3b111a30ada7df5a28e4758d46fd96bc3ddff338dc652b4caad8a51b"} Mar 09 16:57:25.973245 master-0 kubenswrapper[32968]: I0309 16:57:25.973177 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" event={"ID":"fb6baf87-eba1-4d4f-856d-8aed4674a10d","Type":"ContainerStarted","Data":"4771b8ffe120dcbd6c252e276cc0bfe359687ee309a90fdff1990a7eab6caa95"} Mar 09 16:57:25.973363 master-0 kubenswrapper[32968]: I0309 16:57:25.973272 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:25.976253 master-0 kubenswrapper[32968]: I0309 16:57:25.975673 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" event={"ID":"814f8331-d4af-45a1-a75c-77c152a08f6e","Type":"ContainerStarted","Data":"d71a4494635f44d9d5a71a6b0471eef829a93788dcaeb037bf5c6da107edc0e1"} Mar 09 16:57:26.032337 master-0 kubenswrapper[32968]: I0309 16:57:26.032256 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" Mar 09 16:57:26.641601 master-0 kubenswrapper[32968]: I0309 16:57:26.641466 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffkk7" podStartSLOduration=3.819355683 podStartE2EDuration="17.641406589s" podCreationTimestamp="2026-03-09 16:57:09 +0000 UTC" firstStartedPulling="2026-03-09 16:57:10.845311842 +0000 UTC m=+656.948634382" lastFinishedPulling="2026-03-09 16:57:24.667362748 +0000 UTC m=+670.770685288" observedRunningTime="2026-03-09 16:57:26.628529345 +0000 UTC m=+672.731851895" watchObservedRunningTime="2026-03-09 16:57:26.641406589 +0000 UTC m=+672.744729129" Mar 09 16:57:26.686386 master-0 kubenswrapper[32968]: 
I0309 16:57:26.686250 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" podStartSLOduration=3.499925619 podStartE2EDuration="16.686208269s" podCreationTimestamp="2026-03-09 16:57:10 +0000 UTC" firstStartedPulling="2026-03-09 16:57:11.484106581 +0000 UTC m=+657.587429121" lastFinishedPulling="2026-03-09 16:57:24.670389231 +0000 UTC m=+670.773711771" observedRunningTime="2026-03-09 16:57:26.678395125 +0000 UTC m=+672.781717665" watchObservedRunningTime="2026-03-09 16:57:26.686208269 +0000 UTC m=+672.789530819" Mar 09 16:57:26.782758 master-0 kubenswrapper[32968]: I0309 16:57:26.782643 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-rsdh9" podStartSLOduration=4.32773959 podStartE2EDuration="17.782627366s" podCreationTimestamp="2026-03-09 16:57:09 +0000 UTC" firstStartedPulling="2026-03-09 16:57:11.211351892 +0000 UTC m=+657.314674432" lastFinishedPulling="2026-03-09 16:57:24.666239668 +0000 UTC m=+670.769562208" observedRunningTime="2026-03-09 16:57:26.720016498 +0000 UTC m=+672.823339038" watchObservedRunningTime="2026-03-09 16:57:26.782627366 +0000 UTC m=+672.885949896" Mar 09 16:57:26.784459 master-0 kubenswrapper[32968]: I0309 16:57:26.784214 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-sbwv6" podStartSLOduration=3.666690338 podStartE2EDuration="16.7842079s" podCreationTimestamp="2026-03-09 16:57:10 +0000 UTC" firstStartedPulling="2026-03-09 16:57:11.595113929 +0000 UTC m=+657.698436469" lastFinishedPulling="2026-03-09 16:57:24.712631501 +0000 UTC m=+670.815954031" observedRunningTime="2026-03-09 16:57:26.779793429 +0000 UTC m=+672.883115969" watchObservedRunningTime="2026-03-09 16:57:26.7842079 +0000 UTC m=+672.887530440" Mar 09 16:57:26.860473 master-0 kubenswrapper[32968]: I0309 
16:57:26.855999 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d4d9d8967-nknq9" podStartSLOduration=4.503717953 podStartE2EDuration="17.85595907s" podCreationTimestamp="2026-03-09 16:57:09 +0000 UTC" firstStartedPulling="2026-03-09 16:57:11.343451249 +0000 UTC m=+657.446773789" lastFinishedPulling="2026-03-09 16:57:24.695692366 +0000 UTC m=+670.799014906" observedRunningTime="2026-03-09 16:57:26.848051462 +0000 UTC m=+672.951374002" watchObservedRunningTime="2026-03-09 16:57:26.85595907 +0000 UTC m=+672.959281620" Mar 09 16:57:30.863905 master-0 kubenswrapper[32968]: I0309 16:57:30.863803 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-r2dfb" Mar 09 16:57:31.159646 master-0 kubenswrapper[32968]: I0309 16:57:31.159391 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-786c9ddc85-wqcbt" Mar 09 16:57:40.050457 master-0 kubenswrapper[32968]: I0309 16:57:40.049506 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-l6qtv"] Mar 09 16:57:40.056309 master-0 kubenswrapper[32968]: I0309 16:57:40.053518 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.061337 master-0 kubenswrapper[32968]: I0309 16:57:40.059188 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 09 16:57:40.061337 master-0 kubenswrapper[32968]: I0309 16:57:40.059864 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 09 16:57:40.061821 master-0 kubenswrapper[32968]: I0309 16:57:40.061518 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh"] Mar 09 16:57:40.065451 master-0 kubenswrapper[32968]: I0309 16:57:40.063029 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:40.070461 master-0 kubenswrapper[32968]: I0309 16:57:40.068165 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 09 16:57:40.078451 master-0 kubenswrapper[32968]: I0309 16:57:40.076080 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh"] Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102625 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-startup\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102703 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-metrics\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 
16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102733 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-reloader\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102781 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-conf\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102824 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnrlv\" (UniqueName: \"kubernetes.io/projected/44941c43-d5f0-4167-8d05-26eb247a9430-kube-api-access-wnrlv\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102882 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-sockets\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102916 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fa28417-8000-47c2-832c-a1f3558a8a11-metrics-certs\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " 
pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102948 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44941c43-d5f0-4167-8d05-26eb247a9430-cert\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:40.106167 master-0 kubenswrapper[32968]: I0309 16:57:40.102988 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2qbm\" (UniqueName: \"kubernetes.io/projected/5fa28417-8000-47c2-832c-a1f3558a8a11-kube-api-access-g2qbm\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.204502 master-0 kubenswrapper[32968]: I0309 16:57:40.204384 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-reloader\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.204928 master-0 kubenswrapper[32968]: I0309 16:57:40.204593 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-conf\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.204928 master-0 kubenswrapper[32968]: I0309 16:57:40.204661 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnrlv\" (UniqueName: \"kubernetes.io/projected/44941c43-d5f0-4167-8d05-26eb247a9430-kube-api-access-wnrlv\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " 
pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:40.204928 master-0 kubenswrapper[32968]: I0309 16:57:40.204752 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-sockets\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.204928 master-0 kubenswrapper[32968]: I0309 16:57:40.204801 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fa28417-8000-47c2-832c-a1f3558a8a11-metrics-certs\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.204928 master-0 kubenswrapper[32968]: I0309 16:57:40.204828 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44941c43-d5f0-4167-8d05-26eb247a9430-cert\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:40.204928 master-0 kubenswrapper[32968]: I0309 16:57:40.204911 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2qbm\" (UniqueName: \"kubernetes.io/projected/5fa28417-8000-47c2-832c-a1f3558a8a11-kube-api-access-g2qbm\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.205218 master-0 kubenswrapper[32968]: I0309 16:57:40.204966 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-startup\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.205218 
master-0 kubenswrapper[32968]: I0309 16:57:40.205016 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-metrics\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.205967 master-0 kubenswrapper[32968]: I0309 16:57:40.205930 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-reloader\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.207286 master-0 kubenswrapper[32968]: I0309 16:57:40.207247 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-sockets\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.208125 master-0 kubenswrapper[32968]: E0309 16:57:40.208103 32968 secret.go:189] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 09 16:57:40.208289 master-0 kubenswrapper[32968]: E0309 16:57:40.208271 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44941c43-d5f0-4167-8d05-26eb247a9430-cert podName:44941c43-d5f0-4167-8d05-26eb247a9430 nodeName:}" failed. No retries permitted until 2026-03-09 16:57:40.70824356 +0000 UTC m=+686.811566100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44941c43-d5f0-4167-8d05-26eb247a9430-cert") pod "frr-k8s-webhook-server-7f989f654f-plwjh" (UID: "44941c43-d5f0-4167-8d05-26eb247a9430") : secret "frr-k8s-webhook-server-cert" not found Mar 09 16:57:40.216097 master-0 kubenswrapper[32968]: I0309 16:57:40.216009 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-conf\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.217111 master-0 kubenswrapper[32968]: I0309 16:57:40.216277 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fa28417-8000-47c2-832c-a1f3558a8a11-metrics-certs\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.217225 master-0 kubenswrapper[32968]: I0309 16:57:40.216556 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5fa28417-8000-47c2-832c-a1f3558a8a11-metrics\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.217225 master-0 kubenswrapper[32968]: I0309 16:57:40.217067 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5fa28417-8000-47c2-832c-a1f3558a8a11-frr-startup\") pod \"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.267884 master-0 kubenswrapper[32968]: I0309 16:57:40.267805 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2qbm\" (UniqueName: \"kubernetes.io/projected/5fa28417-8000-47c2-832c-a1f3558a8a11-kube-api-access-g2qbm\") pod 
\"frr-k8s-l6qtv\" (UID: \"5fa28417-8000-47c2-832c-a1f3558a8a11\") " pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.284914 master-0 kubenswrapper[32968]: I0309 16:57:40.284848 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnrlv\" (UniqueName: \"kubernetes.io/projected/44941c43-d5f0-4167-8d05-26eb247a9430-kube-api-access-wnrlv\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:40.394765 master-0 kubenswrapper[32968]: I0309 16:57:40.394489 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-s7jgn"] Mar 09 16:57:40.396261 master-0 kubenswrapper[32968]: I0309 16:57:40.396217 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.397709 master-0 kubenswrapper[32968]: I0309 16:57:40.397675 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:40.406823 master-0 kubenswrapper[32968]: I0309 16:57:40.406764 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 09 16:57:40.407045 master-0 kubenswrapper[32968]: I0309 16:57:40.407025 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 09 16:57:40.407214 master-0 kubenswrapper[32968]: I0309 16:57:40.407185 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 09 16:57:40.415462 master-0 kubenswrapper[32968]: I0309 16:57:40.408716 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-hkczp"] Mar 09 16:57:40.427643 master-0 kubenswrapper[32968]: I0309 16:57:40.423555 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:57:40.429782 master-0 kubenswrapper[32968]: I0309 16:57:40.429722 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 09 16:57:40.455090 master-0 kubenswrapper[32968]: I0309 16:57:40.455012 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-hkczp"] Mar 09 16:57:40.515552 master-0 kubenswrapper[32968]: I0309 16:57:40.515479 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-metrics-certs\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.515552 master-0 kubenswrapper[32968]: I0309 16:57:40.515559 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f7841825-700b-4d13-9d15-0d18c2e6f513-metallb-excludel2\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.515956 master-0 kubenswrapper[32968]: I0309 16:57:40.515728 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnbz\" (UniqueName: \"kubernetes.io/projected/9fc827cd-af79-4874-8388-5d92241874c6-kube-api-access-klnbz\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:57:40.515956 master-0 kubenswrapper[32968]: I0309 16:57:40.515809 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-cert\") pod \"controller-86ddb6bd46-hkczp\" (UID: 
\"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:57:40.515956 master-0 kubenswrapper[32968]: I0309 16:57:40.515850 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.515956 master-0 kubenswrapper[32968]: I0309 16:57:40.515879 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwr5\" (UniqueName: \"kubernetes.io/projected/f7841825-700b-4d13-9d15-0d18c2e6f513-kube-api-access-pdwr5\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.515956 master-0 kubenswrapper[32968]: I0309 16:57:40.515909 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-metrics-certs\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.633390 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: E0309 16:57:40.633603 32968 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: E0309 16:57:40.633672 32968 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist podName:f7841825-700b-4d13-9d15-0d18c2e6f513 nodeName:}" failed. No retries permitted until 2026-03-09 16:57:41.133649479 +0000 UTC m=+687.236972019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist") pod "speaker-s7jgn" (UID: "f7841825-700b-4d13-9d15-0d18c2e6f513") : secret "metallb-memberlist" not found Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.633969 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwr5\" (UniqueName: \"kubernetes.io/projected/f7841825-700b-4d13-9d15-0d18c2e6f513-kube-api-access-pdwr5\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.634033 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-metrics-certs\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.634099 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-metrics-certs\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn" Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.634135 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f7841825-700b-4d13-9d15-0d18c2e6f513-metallb-excludel2\") pod \"speaker-s7jgn\" (UID: 
\"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.634299 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klnbz\" (UniqueName: \"kubernetes.io/projected/9fc827cd-af79-4874-8388-5d92241874c6-kube-api-access-klnbz\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:40.634459 master-0 kubenswrapper[32968]: I0309 16:57:40.634358 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-cert\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:40.639503 master-0 kubenswrapper[32968]: I0309 16:57:40.637508 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f7841825-700b-4d13-9d15-0d18c2e6f513-metallb-excludel2\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:40.639503 master-0 kubenswrapper[32968]: E0309 16:57:40.637677 32968 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Mar 09 16:57:40.639503 master-0 kubenswrapper[32968]: E0309 16:57:40.637734 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-metrics-certs podName:f7841825-700b-4d13-9d15-0d18c2e6f513 nodeName:}" failed. No retries permitted until 2026-03-09 16:57:41.137716021 +0000 UTC m=+687.241038561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-metrics-certs") pod "speaker-s7jgn" (UID: "f7841825-700b-4d13-9d15-0d18c2e6f513") : secret "speaker-certs-secret" not found
Mar 09 16:57:40.639503 master-0 kubenswrapper[32968]: E0309 16:57:40.637907 32968 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Mar 09 16:57:40.639503 master-0 kubenswrapper[32968]: E0309 16:57:40.637999 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-metrics-certs podName:9fc827cd-af79-4874-8388-5d92241874c6 nodeName:}" failed. No retries permitted until 2026-03-09 16:57:41.137973138 +0000 UTC m=+687.241295808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-metrics-certs") pod "controller-86ddb6bd46-hkczp" (UID: "9fc827cd-af79-4874-8388-5d92241874c6") : secret "controller-certs-secret" not found
Mar 09 16:57:40.648924 master-0 kubenswrapper[32968]: I0309 16:57:40.648766 32968 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 09 16:57:40.653570 master-0 kubenswrapper[32968]: I0309 16:57:40.653364 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-cert\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:40.698631 master-0 kubenswrapper[32968]: I0309 16:57:40.698540 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klnbz\" (UniqueName: \"kubernetes.io/projected/9fc827cd-af79-4874-8388-5d92241874c6-kube-api-access-klnbz\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:40.720462 master-0 kubenswrapper[32968]: I0309 16:57:40.718469 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwr5\" (UniqueName: \"kubernetes.io/projected/f7841825-700b-4d13-9d15-0d18c2e6f513-kube-api-access-pdwr5\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:40.735902 master-0 kubenswrapper[32968]: I0309 16:57:40.735821 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44941c43-d5f0-4167-8d05-26eb247a9430-cert\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh"
Mar 09 16:57:40.740840 master-0 kubenswrapper[32968]: I0309 16:57:40.740803 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44941c43-d5f0-4167-8d05-26eb247a9430-cert\") pod \"frr-k8s-webhook-server-7f989f654f-plwjh\" (UID: \"44941c43-d5f0-4167-8d05-26eb247a9430\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh"
Mar 09 16:57:40.860142 master-0 kubenswrapper[32968]: I0309 16:57:40.860056 32968 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 09 16:57:41.032167 master-0 kubenswrapper[32968]: I0309 16:57:41.032085 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh"
Mar 09 16:57:41.131004 master-0 kubenswrapper[32968]: I0309 16:57:41.130926 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"d266e1e9e3f4dbdca70736f5316e64285056eb74aa54298b61be0cb7d7b6012e"}
Mar 09 16:57:41.150782 master-0 kubenswrapper[32968]: I0309 16:57:41.150612 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:41.150782 master-0 kubenswrapper[32968]: I0309 16:57:41.150742 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-metrics-certs\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:41.151090 master-0 kubenswrapper[32968]: I0309 16:57:41.150803 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-metrics-certs\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:41.158063 master-0 kubenswrapper[32968]: E0309 16:57:41.153948 32968 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 09 16:57:41.158063 master-0 kubenswrapper[32968]: E0309 16:57:41.154097 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist podName:f7841825-700b-4d13-9d15-0d18c2e6f513 nodeName:}" failed. No retries permitted until 2026-03-09 16:57:42.154075878 +0000 UTC m=+688.257398428 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist") pod "speaker-s7jgn" (UID: "f7841825-700b-4d13-9d15-0d18c2e6f513") : secret "metallb-memberlist" not found
Mar 09 16:57:41.158063 master-0 kubenswrapper[32968]: I0309 16:57:41.156893 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-metrics-certs\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:41.160380 master-0 kubenswrapper[32968]: I0309 16:57:41.160335 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9fc827cd-af79-4874-8388-5d92241874c6-metrics-certs\") pod \"controller-86ddb6bd46-hkczp\" (UID: \"9fc827cd-af79-4874-8388-5d92241874c6\") " pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:41.221562 master-0 kubenswrapper[32968]: I0309 16:57:41.221387 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-hkczp"
Mar 09 16:57:41.614456 master-0 kubenswrapper[32968]: I0309 16:57:41.613694 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh"]
Mar 09 16:57:41.627607 master-0 kubenswrapper[32968]: W0309 16:57:41.627530 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44941c43_d5f0_4167_8d05_26eb247a9430.slice/crio-da62ff35a33b3c62e92e3daefbe3401d93f774afbabda007c0df407b72ffdbe1 WatchSource:0}: Error finding container da62ff35a33b3c62e92e3daefbe3401d93f774afbabda007c0df407b72ffdbe1: Status 404 returned error can't find the container with id da62ff35a33b3c62e92e3daefbe3401d93f774afbabda007c0df407b72ffdbe1
Mar 09 16:57:41.822731 master-0 kubenswrapper[32968]: I0309 16:57:41.821686 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-hkczp"]
Mar 09 16:57:41.832726 master-0 kubenswrapper[32968]: W0309 16:57:41.832630 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fc827cd_af79_4874_8388_5d92241874c6.slice/crio-a1c1b4085a06328555d46c88a318096c7962110fa8bf72915d9e702507948d39 WatchSource:0}: Error finding container a1c1b4085a06328555d46c88a318096c7962110fa8bf72915d9e702507948d39: Status 404 returned error can't find the container with id a1c1b4085a06328555d46c88a318096c7962110fa8bf72915d9e702507948d39
Mar 09 16:57:42.152407 master-0 kubenswrapper[32968]: I0309 16:57:42.152165 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" event={"ID":"44941c43-d5f0-4167-8d05-26eb247a9430","Type":"ContainerStarted","Data":"da62ff35a33b3c62e92e3daefbe3401d93f774afbabda007c0df407b72ffdbe1"}
Mar 09 16:57:42.158130 master-0 kubenswrapper[32968]: I0309 16:57:42.158045 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-hkczp" event={"ID":"9fc827cd-af79-4874-8388-5d92241874c6","Type":"ContainerStarted","Data":"d17da6f84994c81c51a72a219f6c5408c4af6388b32329f9153e4ef6b9e7a062"}
Mar 09 16:57:42.158697 master-0 kubenswrapper[32968]: I0309 16:57:42.158676 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-hkczp" event={"ID":"9fc827cd-af79-4874-8388-5d92241874c6","Type":"ContainerStarted","Data":"a1c1b4085a06328555d46c88a318096c7962110fa8bf72915d9e702507948d39"}
Mar 09 16:57:42.178161 master-0 kubenswrapper[32968]: I0309 16:57:42.178061 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:42.185346 master-0 kubenswrapper[32968]: I0309 16:57:42.185094 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f7841825-700b-4d13-9d15-0d18c2e6f513-memberlist\") pod \"speaker-s7jgn\" (UID: \"f7841825-700b-4d13-9d15-0d18c2e6f513\") " pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:42.252313 master-0 kubenswrapper[32968]: I0309 16:57:42.252251 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-s7jgn"
Mar 09 16:57:42.315603 master-0 kubenswrapper[32968]: W0309 16:57:42.315392 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7841825_700b_4d13_9d15_0d18c2e6f513.slice/crio-9defe45fef2baa7f76a00e889627abc814eadc1d51a5aee8d489effd4246036e WatchSource:0}: Error finding container 9defe45fef2baa7f76a00e889627abc814eadc1d51a5aee8d489effd4246036e: Status 404 returned error can't find the container with id 9defe45fef2baa7f76a00e889627abc814eadc1d51a5aee8d489effd4246036e
Mar 09 16:57:42.462561 master-0 kubenswrapper[32968]: I0309 16:57:42.461685 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-24chs"]
Mar 09 16:57:42.467490 master-0 kubenswrapper[32968]: I0309 16:57:42.465680 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs"
Mar 09 16:57:42.479762 master-0 kubenswrapper[32968]: I0309 16:57:42.479693 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"]
Mar 09 16:57:42.481258 master-0 kubenswrapper[32968]: I0309 16:57:42.481208 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:42.485549 master-0 kubenswrapper[32968]: I0309 16:57:42.484557 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Mar 09 16:57:42.518467 master-0 kubenswrapper[32968]: I0309 16:57:42.518358 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-24chs"]
Mar 09 16:57:42.557554 master-0 kubenswrapper[32968]: I0309 16:57:42.556493 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"]
Mar 09 16:57:42.575475 master-0 kubenswrapper[32968]: I0309 16:57:42.573667 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hvvlm"]
Mar 09 16:57:42.579443 master-0 kubenswrapper[32968]: I0309 16:57:42.577877 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.593457 master-0 kubenswrapper[32968]: I0309 16:57:42.590130 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:42.593457 master-0 kubenswrapper[32968]: I0309 16:57:42.590207 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2kv\" (UniqueName: \"kubernetes.io/projected/64714b31-41b0-4495-a931-21f5a4e765b4-kube-api-access-7v2kv\") pod \"nmstate-metrics-69594cc75-24chs\" (UID: \"64714b31-41b0-4495-a931-21f5a4e765b4\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs"
Mar 09 16:57:42.593457 master-0 kubenswrapper[32968]: I0309 16:57:42.590261 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt975\" (UniqueName: \"kubernetes.io/projected/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-kube-api-access-pt975\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695290 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2kv\" (UniqueName: \"kubernetes.io/projected/64714b31-41b0-4495-a931-21f5a4e765b4-kube-api-access-7v2kv\") pod \"nmstate-metrics-69594cc75-24chs\" (UID: \"64714b31-41b0-4495-a931-21f5a4e765b4\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695362 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt975\" (UniqueName: \"kubernetes.io/projected/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-kube-api-access-pt975\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695548 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-ovs-socket\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695583 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-nmstate-lock\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695612 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-dbus-socket\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695636 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8lck\" (UniqueName: \"kubernetes.io/projected/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-kube-api-access-z8lck\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: I0309 16:57:42.695678 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: E0309 16:57:42.695863 32968 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Mar 09 16:57:42.697749 master-0 kubenswrapper[32968]: E0309 16:57:42.695931 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-tls-key-pair podName:ec2d3d82-8d4d-43a9-830d-f186bd072b1d nodeName:}" failed. No retries permitted until 2026-03-09 16:57:43.195907593 +0000 UTC m=+689.299230133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-tls-key-pair") pod "nmstate-webhook-786f45cff4-qtg4t" (UID: "ec2d3d82-8d4d-43a9-830d-f186bd072b1d") : secret "openshift-nmstate-webhook" not found
Mar 09 16:57:42.745514 master-0 kubenswrapper[32968]: I0309 16:57:42.742094 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2kv\" (UniqueName: \"kubernetes.io/projected/64714b31-41b0-4495-a931-21f5a4e765b4-kube-api-access-7v2kv\") pod \"nmstate-metrics-69594cc75-24chs\" (UID: \"64714b31-41b0-4495-a931-21f5a4e765b4\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs"
Mar 09 16:57:42.745514 master-0 kubenswrapper[32968]: I0309 16:57:42.744840 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt975\" (UniqueName: \"kubernetes.io/projected/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-kube-api-access-pt975\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:42.745514 master-0 kubenswrapper[32968]: I0309 16:57:42.744924 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"]
Mar 09 16:57:42.767487 master-0 kubenswrapper[32968]: I0309 16:57:42.767265 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"]
Mar 09 16:57:42.767487 master-0 kubenswrapper[32968]: I0309 16:57:42.767436 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.772467 master-0 kubenswrapper[32968]: I0309 16:57:42.772316 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 09 16:57:42.775465 master-0 kubenswrapper[32968]: I0309 16:57:42.773055 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.796978 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-ovs-socket\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797067 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-nmstate-lock\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797187 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-ovs-socket\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797284 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-dbus-socket\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797331 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8lck\" (UniqueName: \"kubernetes.io/projected/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-kube-api-access-z8lck\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797588 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797737 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.797838 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffh84\" (UniqueName: \"kubernetes.io/projected/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-kube-api-access-ffh84\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.798173 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-nmstate-lock\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.802448 master-0 kubenswrapper[32968]: I0309 16:57:42.798234 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-dbus-socket\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.825511 master-0 kubenswrapper[32968]: I0309 16:57:42.823296 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs"
Mar 09 16:57:42.846494 master-0 kubenswrapper[32968]: I0309 16:57:42.842801 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8lck\" (UniqueName: \"kubernetes.io/projected/c5cece49-bd58-4e1f-99aa-dce6e031c9f0-kube-api-access-z8lck\") pod \"nmstate-handler-hvvlm\" (UID: \"c5cece49-bd58-4e1f-99aa-dce6e031c9f0\") " pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:42.902466 master-0 kubenswrapper[32968]: I0309 16:57:42.900802 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.902466 master-0 kubenswrapper[32968]: I0309 16:57:42.900886 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.902466 master-0 kubenswrapper[32968]: I0309 16:57:42.900917 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffh84\" (UniqueName: \"kubernetes.io/projected/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-kube-api-access-ffh84\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.903935 master-0 kubenswrapper[32968]: I0309 16:57:42.903206 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.903935 master-0 kubenswrapper[32968]: E0309 16:57:42.903302 32968 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Mar 09 16:57:42.903935 master-0 kubenswrapper[32968]: E0309 16:57:42.903347 32968 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-plugin-serving-cert podName:1cfc4434-2f72-43b7-8a57-4843c0eb4cf9 nodeName:}" failed. No retries permitted until 2026-03-09 16:57:43.403333738 +0000 UTC m=+689.506656278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-q6r68" (UID: "1cfc4434-2f72-43b7-8a57-4843c0eb4cf9") : secret "plugin-serving-cert" not found
Mar 09 16:57:42.936573 master-0 kubenswrapper[32968]: I0309 16:57:42.933708 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffh84\" (UniqueName: \"kubernetes.io/projected/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-kube-api-access-ffh84\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:42.953468 master-0 kubenswrapper[32968]: I0309 16:57:42.951077 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hvvlm"
Mar 09 16:57:43.013899 master-0 kubenswrapper[32968]: I0309 16:57:43.011718 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-566b6d8768-r99qh"]
Mar 09 16:57:43.013899 master-0 kubenswrapper[32968]: I0309 16:57:43.013573 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.035470 master-0 kubenswrapper[32968]: I0309 16:57:43.034600 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-566b6d8768-r99qh"]
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.107980 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81a22103-65ff-4882-9981-b52f2e431eb5-console-serving-cert\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.108134 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-service-ca\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.108276 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81a22103-65ff-4882-9981-b52f2e431eb5-console-oauth-config\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.108475 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-console-config\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.108532 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-trusted-ca-bundle\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.108551 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-oauth-serving-cert\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.110249 master-0 kubenswrapper[32968]: I0309 16:57:43.108652 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vn8\" (UniqueName: \"kubernetes.io/projected/81a22103-65ff-4882-9981-b52f2e431eb5-kube-api-access-n2vn8\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.182045 master-0 kubenswrapper[32968]: I0309 16:57:43.181984 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hvvlm" event={"ID":"c5cece49-bd58-4e1f-99aa-dce6e031c9f0","Type":"ContainerStarted","Data":"65f0783b15b8e06a804b2e4429741a7bd6ab3781c7211f88f6dba0dfe313c8e6"}
Mar 09 16:57:43.189089 master-0 kubenswrapper[32968]: I0309 16:57:43.188924 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-s7jgn" event={"ID":"f7841825-700b-4d13-9d15-0d18c2e6f513","Type":"ContainerStarted","Data":"60b38056f1701a25518f4212451dd3663816f2aa14eb0170d4762c426c3cce8f"}
Mar 09 16:57:43.189089 master-0 kubenswrapper[32968]: I0309 16:57:43.188995 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-s7jgn" event={"ID":"f7841825-700b-4d13-9d15-0d18c2e6f513","Type":"ContainerStarted","Data":"9defe45fef2baa7f76a00e889627abc814eadc1d51a5aee8d489effd4246036e"}
Mar 09 16:57:43.210896 master-0 kubenswrapper[32968]: I0309 16:57:43.210827 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81a22103-65ff-4882-9981-b52f2e431eb5-console-serving-cert\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.210896 master-0 kubenswrapper[32968]: I0309 16:57:43.210902 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-service-ca\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.211244 master-0 kubenswrapper[32968]: I0309 16:57:43.211176 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81a22103-65ff-4882-9981-b52f2e431eb5-console-oauth-config\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.211543 master-0 kubenswrapper[32968]: I0309 16:57:43.211454 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-console-config\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.211637 master-0 kubenswrapper[32968]: I0309 16:57:43.211566 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-trusted-ca-bundle\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.211718 master-0 kubenswrapper[32968]: I0309 16:57:43.211687 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-oauth-serving-cert\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.212398 master-0 kubenswrapper[32968]: I0309 16:57:43.211997 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vn8\" (UniqueName: \"kubernetes.io/projected/81a22103-65ff-4882-9981-b52f2e431eb5-kube-api-access-n2vn8\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.212398 master-0 kubenswrapper[32968]: I0309 16:57:43.212098 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-service-ca\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.212398 master-0 kubenswrapper[32968]: I0309 16:57:43.212133 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:43.213819 master-0 kubenswrapper[32968]: I0309 16:57:43.213799 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-console-config\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.213984 master-0 kubenswrapper[32968]: I0309 16:57:43.213883 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-oauth-serving-cert\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.215459 master-0 kubenswrapper[32968]: I0309 16:57:43.215413 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a22103-65ff-4882-9981-b52f2e431eb5-trusted-ca-bundle\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.215652 master-0 kubenswrapper[32968]: I0309 16:57:43.215606 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81a22103-65ff-4882-9981-b52f2e431eb5-console-serving-cert\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.217074 master-0 kubenswrapper[32968]: I0309 16:57:43.217032 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81a22103-65ff-4882-9981-b52f2e431eb5-console-oauth-config\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.220365 master-0 kubenswrapper[32968]: I0309 16:57:43.220288 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ec2d3d82-8d4d-43a9-830d-f186bd072b1d-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-qtg4t\" (UID: \"ec2d3d82-8d4d-43a9-830d-f186bd072b1d\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"
Mar 09 16:57:43.241871 master-0 kubenswrapper[32968]: I0309 16:57:43.241747 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vn8\" (UniqueName: \"kubernetes.io/projected/81a22103-65ff-4882-9981-b52f2e431eb5-kube-api-access-n2vn8\") pod \"console-566b6d8768-r99qh\" (UID: \"81a22103-65ff-4882-9981-b52f2e431eb5\") " pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.345473 master-0 kubenswrapper[32968]: I0309 16:57:43.345051 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-566b6d8768-r99qh"
Mar 09 16:57:43.416767 master-0 kubenswrapper[32968]: I0309 16:57:43.416632 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:43.433619 master-0 kubenswrapper[32968]: I0309 16:57:43.422505 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1cfc4434-2f72-43b7-8a57-4843c0eb4cf9-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-q6r68\" (UID: \"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"
Mar 09 16:57:43.447457 master-0 kubenswrapper[32968]: I0309 16:57:43.441586 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t" Mar 09 16:57:43.447457 master-0 kubenswrapper[32968]: I0309 16:57:43.443340 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68" Mar 09 16:57:43.479937 master-0 kubenswrapper[32968]: I0309 16:57:43.478852 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-24chs"] Mar 09 16:57:43.492455 master-0 kubenswrapper[32968]: W0309 16:57:43.490307 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64714b31_41b0_4495_a931_21f5a4e765b4.slice/crio-c49b968ff5f1e1e2b67a1d87818a40eaba274520301e431d8ceacf9b06cc80c4 WatchSource:0}: Error finding container c49b968ff5f1e1e2b67a1d87818a40eaba274520301e431d8ceacf9b06cc80c4: Status 404 returned error can't find the container with id c49b968ff5f1e1e2b67a1d87818a40eaba274520301e431d8ceacf9b06cc80c4 Mar 09 16:57:44.016723 master-0 kubenswrapper[32968]: I0309 16:57:44.016618 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-566b6d8768-r99qh"] Mar 09 16:57:44.092267 master-0 kubenswrapper[32968]: W0309 16:57:44.092013 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cfc4434_2f72_43b7_8a57_4843c0eb4cf9.slice/crio-017401ec3af3b310e45576608586c3353a4e88712b806b41e8ebbb5ae1482f60 WatchSource:0}: Error finding container 017401ec3af3b310e45576608586c3353a4e88712b806b41e8ebbb5ae1482f60: Status 404 returned error can't find the container with id 017401ec3af3b310e45576608586c3353a4e88712b806b41e8ebbb5ae1482f60 Mar 09 16:57:44.116757 master-0 kubenswrapper[32968]: I0309 16:57:44.115731 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68"] Mar 09 16:57:44.198316 
master-0 kubenswrapper[32968]: I0309 16:57:44.198234 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t"] Mar 09 16:57:44.228600 master-0 kubenswrapper[32968]: I0309 16:57:44.228505 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-hkczp" event={"ID":"9fc827cd-af79-4874-8388-5d92241874c6","Type":"ContainerStarted","Data":"cf9fed6eceb20d9a1550d630eafab6c468a730711ab8498f887db854047322d6"} Mar 09 16:57:44.229751 master-0 kubenswrapper[32968]: I0309 16:57:44.229504 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:57:44.234396 master-0 kubenswrapper[32968]: I0309 16:57:44.233946 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-566b6d8768-r99qh" event={"ID":"81a22103-65ff-4882-9981-b52f2e431eb5","Type":"ContainerStarted","Data":"53b8e0db1c699dd3b4f9ab2185c9d3c83aecee7ebc50995395d328300d1c0151"} Mar 09 16:57:44.234396 master-0 kubenswrapper[32968]: I0309 16:57:44.234080 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-566b6d8768-r99qh" event={"ID":"81a22103-65ff-4882-9981-b52f2e431eb5","Type":"ContainerStarted","Data":"61a834b69482517717f497a0b4d21d78368ec7364aa0d1bda929f67326584a0a"} Mar 09 16:57:44.239496 master-0 kubenswrapper[32968]: I0309 16:57:44.239442 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68" event={"ID":"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9","Type":"ContainerStarted","Data":"017401ec3af3b310e45576608586c3353a4e88712b806b41e8ebbb5ae1482f60"} Mar 09 16:57:44.244340 master-0 kubenswrapper[32968]: I0309 16:57:44.243460 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs" 
event={"ID":"64714b31-41b0-4495-a931-21f5a4e765b4","Type":"ContainerStarted","Data":"c49b968ff5f1e1e2b67a1d87818a40eaba274520301e431d8ceacf9b06cc80c4"} Mar 09 16:57:44.248894 master-0 kubenswrapper[32968]: I0309 16:57:44.248805 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t" event={"ID":"ec2d3d82-8d4d-43a9-830d-f186bd072b1d","Type":"ContainerStarted","Data":"9c9481c323e9831abf84c09751a959189ea02421a1ee0c97721702174e3141f5"} Mar 09 16:57:44.270254 master-0 kubenswrapper[32968]: I0309 16:57:44.270140 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-hkczp" podStartSLOduration=2.553359158 podStartE2EDuration="4.270030343s" podCreationTimestamp="2026-03-09 16:57:40 +0000 UTC" firstStartedPulling="2026-03-09 16:57:42.025402672 +0000 UTC m=+688.128725212" lastFinishedPulling="2026-03-09 16:57:43.742073867 +0000 UTC m=+689.845396397" observedRunningTime="2026-03-09 16:57:44.260378668 +0000 UTC m=+690.363701198" watchObservedRunningTime="2026-03-09 16:57:44.270030343 +0000 UTC m=+690.373352883" Mar 09 16:57:44.310247 master-0 kubenswrapper[32968]: I0309 16:57:44.309495 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-566b6d8768-r99qh" podStartSLOduration=2.309373253 podStartE2EDuration="2.309373253s" podCreationTimestamp="2026-03-09 16:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:57:44.290035652 +0000 UTC m=+690.393358202" watchObservedRunningTime="2026-03-09 16:57:44.309373253 +0000 UTC m=+690.412695793" Mar 09 16:57:45.267556 master-0 kubenswrapper[32968]: I0309 16:57:45.267401 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-s7jgn" 
event={"ID":"f7841825-700b-4d13-9d15-0d18c2e6f513","Type":"ContainerStarted","Data":"f65e9dbb1e1048171d79f952bbf0970542efd6eb1b9b0c4a67f33512af96c7ef"} Mar 09 16:57:45.269646 master-0 kubenswrapper[32968]: I0309 16:57:45.268169 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-s7jgn" Mar 09 16:57:45.301586 master-0 kubenswrapper[32968]: I0309 16:57:45.301486 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-s7jgn" podStartSLOduration=3.84055783 podStartE2EDuration="5.301402521s" podCreationTimestamp="2026-03-09 16:57:40 +0000 UTC" firstStartedPulling="2026-03-09 16:57:42.92164535 +0000 UTC m=+689.024967890" lastFinishedPulling="2026-03-09 16:57:44.382490041 +0000 UTC m=+690.485812581" observedRunningTime="2026-03-09 16:57:45.295607522 +0000 UTC m=+691.398930082" watchObservedRunningTime="2026-03-09 16:57:45.301402521 +0000 UTC m=+691.404725071" Mar 09 16:57:52.256891 master-0 kubenswrapper[32968]: I0309 16:57:52.256799 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-s7jgn" Mar 09 16:57:53.345962 master-0 kubenswrapper[32968]: I0309 16:57:53.345708 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-566b6d8768-r99qh" Mar 09 16:57:53.345962 master-0 kubenswrapper[32968]: I0309 16:57:53.345786 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-566b6d8768-r99qh" Mar 09 16:57:53.353162 master-0 kubenswrapper[32968]: I0309 16:57:53.353100 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-566b6d8768-r99qh" Mar 09 16:57:53.369883 master-0 kubenswrapper[32968]: I0309 16:57:53.369691 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68" 
event={"ID":"1cfc4434-2f72-43b7-8a57-4843c0eb4cf9","Type":"ContainerStarted","Data":"8c681a1b2d3d371daf8c8fa67c2eca769589c69c64dd5d1602cbc68532006e30"} Mar 09 16:57:53.376511 master-0 kubenswrapper[32968]: I0309 16:57:53.374147 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs" event={"ID":"64714b31-41b0-4495-a931-21f5a4e765b4","Type":"ContainerStarted","Data":"e2a5217d0ba87885459d23e994b4b47b8d9005899940876b8497033cc7b75fd6"} Mar 09 16:57:53.376511 master-0 kubenswrapper[32968]: I0309 16:57:53.374206 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs" event={"ID":"64714b31-41b0-4495-a931-21f5a4e765b4","Type":"ContainerStarted","Data":"150009b9f6abcfda0420607d718ff751d5188994f43f4af6ef1fffcd586db4d2"} Mar 09 16:57:53.377761 master-0 kubenswrapper[32968]: I0309 16:57:53.377327 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t" event={"ID":"ec2d3d82-8d4d-43a9-830d-f186bd072b1d","Type":"ContainerStarted","Data":"a623827cd3c2aa561f53166492f009164ac4eeaf6552608289a9f3b874d30748"} Mar 09 16:57:53.378638 master-0 kubenswrapper[32968]: I0309 16:57:53.378251 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t" Mar 09 16:57:53.392855 master-0 kubenswrapper[32968]: I0309 16:57:53.392779 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hvvlm" event={"ID":"c5cece49-bd58-4e1f-99aa-dce6e031c9f0","Type":"ContainerStarted","Data":"81909ccda5f497e6512e25b13e51fb2ad808da2e10b1a320dd3d6d4c5f14b6bc"} Mar 09 16:57:53.393217 master-0 kubenswrapper[32968]: I0309 16:57:53.393149 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hvvlm" Mar 09 16:57:53.398589 master-0 kubenswrapper[32968]: I0309 16:57:53.398503 32968 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" event={"ID":"44941c43-d5f0-4167-8d05-26eb247a9430","Type":"ContainerStarted","Data":"8bcf305dfcf25b850ca6699a919dd89cea47958516575abacf6a59af55fdb3fe"} Mar 09 16:57:53.399667 master-0 kubenswrapper[32968]: I0309 16:57:53.399560 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:57:53.403203 master-0 kubenswrapper[32968]: I0309 16:57:53.403121 32968 generic.go:334] "Generic (PLEG): container finished" podID="5fa28417-8000-47c2-832c-a1f3558a8a11" containerID="4ce5703d57ca45cba8fd3de8fd3c04bf11e29beca1ec970cb5ac2d0cf0838d80" exitCode=0 Mar 09 16:57:53.403317 master-0 kubenswrapper[32968]: I0309 16:57:53.403227 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerDied","Data":"4ce5703d57ca45cba8fd3de8fd3c04bf11e29beca1ec970cb5ac2d0cf0838d80"} Mar 09 16:57:53.404781 master-0 kubenswrapper[32968]: I0309 16:57:53.404681 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-q6r68" podStartSLOduration=3.445062905 podStartE2EDuration="11.404599287s" podCreationTimestamp="2026-03-09 16:57:42 +0000 UTC" firstStartedPulling="2026-03-09 16:57:44.095149591 +0000 UTC m=+690.198472121" lastFinishedPulling="2026-03-09 16:57:52.054685963 +0000 UTC m=+698.158008503" observedRunningTime="2026-03-09 16:57:53.39776426 +0000 UTC m=+699.501086800" watchObservedRunningTime="2026-03-09 16:57:53.404599287 +0000 UTC m=+699.507921827" Mar 09 16:57:53.412339 master-0 kubenswrapper[32968]: I0309 16:57:53.412297 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-566b6d8768-r99qh" Mar 09 16:57:53.456790 master-0 kubenswrapper[32968]: I0309 16:57:53.456623 32968 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t" podStartSLOduration=3.60950389 podStartE2EDuration="11.456595805s" podCreationTimestamp="2026-03-09 16:57:42 +0000 UTC" firstStartedPulling="2026-03-09 16:57:44.209256734 +0000 UTC m=+690.312579274" lastFinishedPulling="2026-03-09 16:57:52.056348649 +0000 UTC m=+698.159671189" observedRunningTime="2026-03-09 16:57:53.455995409 +0000 UTC m=+699.559317959" watchObservedRunningTime="2026-03-09 16:57:53.456595805 +0000 UTC m=+699.559918365" Mar 09 16:57:53.496332 master-0 kubenswrapper[32968]: I0309 16:57:53.496223 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-24chs" podStartSLOduration=2.936924114 podStartE2EDuration="11.496181532s" podCreationTimestamp="2026-03-09 16:57:42 +0000 UTC" firstStartedPulling="2026-03-09 16:57:43.495035004 +0000 UTC m=+689.598357544" lastFinishedPulling="2026-03-09 16:57:52.054292422 +0000 UTC m=+698.157614962" observedRunningTime="2026-03-09 16:57:53.488712087 +0000 UTC m=+699.592034627" watchObservedRunningTime="2026-03-09 16:57:53.496181532 +0000 UTC m=+699.599504082" Mar 09 16:57:53.518894 master-0 kubenswrapper[32968]: I0309 16:57:53.516716 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" podStartSLOduration=4.01561189 podStartE2EDuration="14.516684415s" podCreationTimestamp="2026-03-09 16:57:39 +0000 UTC" firstStartedPulling="2026-03-09 16:57:41.630663974 +0000 UTC m=+687.733986514" lastFinishedPulling="2026-03-09 16:57:52.131736489 +0000 UTC m=+698.235059039" observedRunningTime="2026-03-09 16:57:53.514115294 +0000 UTC m=+699.617437854" watchObservedRunningTime="2026-03-09 16:57:53.516684415 +0000 UTC m=+699.620006955" Mar 09 16:57:53.621759 master-0 kubenswrapper[32968]: I0309 16:57:53.621337 32968 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-nmstate/nmstate-handler-hvvlm" podStartSLOduration=2.590437911 podStartE2EDuration="11.621299668s" podCreationTimestamp="2026-03-09 16:57:42 +0000 UTC" firstStartedPulling="2026-03-09 16:57:43.023853457 +0000 UTC m=+689.127175997" lastFinishedPulling="2026-03-09 16:57:52.054715214 +0000 UTC m=+698.158037754" observedRunningTime="2026-03-09 16:57:53.618049948 +0000 UTC m=+699.721372488" watchObservedRunningTime="2026-03-09 16:57:53.621299668 +0000 UTC m=+699.724622218" Mar 09 16:57:53.665143 master-0 kubenswrapper[32968]: I0309 16:57:53.664439 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76b574c97f-rcllb"] Mar 09 16:57:54.417643 master-0 kubenswrapper[32968]: I0309 16:57:54.417559 32968 generic.go:334] "Generic (PLEG): container finished" podID="5fa28417-8000-47c2-832c-a1f3558a8a11" containerID="c58bd9efd1685afe6bd24d5fdbc3ee12cb799e97ad6c71d0752e3af9913365aa" exitCode=0 Mar 09 16:57:54.419128 master-0 kubenswrapper[32968]: I0309 16:57:54.419079 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerDied","Data":"c58bd9efd1685afe6bd24d5fdbc3ee12cb799e97ad6c71d0752e3af9913365aa"} Mar 09 16:57:55.450782 master-0 kubenswrapper[32968]: I0309 16:57:55.450698 32968 generic.go:334] "Generic (PLEG): container finished" podID="5fa28417-8000-47c2-832c-a1f3558a8a11" containerID="d5b6155a89bf0dfd695eea62436083304f322bc528156a9173659886cf6c3e69" exitCode=0 Mar 09 16:57:55.453292 master-0 kubenswrapper[32968]: I0309 16:57:55.450830 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerDied","Data":"d5b6155a89bf0dfd695eea62436083304f322bc528156a9173659886cf6c3e69"} Mar 09 16:57:56.473081 master-0 kubenswrapper[32968]: I0309 16:57:56.472985 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"48e088cea9a29f77fbef15d8af7a673f3641a5d497f2cab86452f705c0a0a4fb"} Mar 09 16:57:56.473081 master-0 kubenswrapper[32968]: I0309 16:57:56.473064 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"3fa7cefe02ca5fdf7fa65b199844a108aa35873921d6ff1fa60d76840c0324f0"} Mar 09 16:57:56.473081 master-0 kubenswrapper[32968]: I0309 16:57:56.473081 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"5f791629407c0f81b5998a807f3f34fa47b1e0e0f60a59abfa7f541da94846f5"} Mar 09 16:57:56.473081 master-0 kubenswrapper[32968]: I0309 16:57:56.473093 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"772da7f50010c4089465d0bcb67bd6f970ec5437d35aade4328f1e30e144bc47"} Mar 09 16:57:56.473081 master-0 kubenswrapper[32968]: I0309 16:57:56.473105 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"a1d11fedd25d964c1f112ffe4fdb48282eb263963f7c34cefbd4277fda22f7c4"} Mar 09 16:57:57.488730 master-0 kubenswrapper[32968]: I0309 16:57:57.488611 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l6qtv" event={"ID":"5fa28417-8000-47c2-832c-a1f3558a8a11","Type":"ContainerStarted","Data":"0a8c824362f52abf7fae3b891ff4e31cd95b39577aa50a92df67b1213cc25bbf"} Mar 09 16:57:57.490068 master-0 kubenswrapper[32968]: I0309 16:57:57.488997 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:57:57.521696 master-0 
kubenswrapper[32968]: I0309 16:57:57.521572 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-l6qtv" podStartSLOduration=7.324697816 podStartE2EDuration="18.521547365s" podCreationTimestamp="2026-03-09 16:57:39 +0000 UTC" firstStartedPulling="2026-03-09 16:57:40.859556412 +0000 UTC m=+686.962878952" lastFinishedPulling="2026-03-09 16:57:52.056405961 +0000 UTC m=+698.159728501" observedRunningTime="2026-03-09 16:57:57.515740706 +0000 UTC m=+703.619063266" watchObservedRunningTime="2026-03-09 16:57:57.521547365 +0000 UTC m=+703.624869905" Mar 09 16:57:57.981237 master-0 kubenswrapper[32968]: I0309 16:57:57.981157 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hvvlm" Mar 09 16:58:00.399014 master-0 kubenswrapper[32968]: I0309 16:58:00.398957 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:58:00.446274 master-0 kubenswrapper[32968]: I0309 16:58:00.446167 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:58:01.227445 master-0 kubenswrapper[32968]: I0309 16:58:01.227346 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-hkczp" Mar 09 16:58:03.449356 master-0 kubenswrapper[32968]: I0309 16:58:03.449278 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-qtg4t" Mar 09 16:58:08.639170 master-0 kubenswrapper[32968]: I0309 16:58:08.639084 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-8xgtw"] Mar 09 16:58:08.641307 master-0 kubenswrapper[32968]: I0309 16:58:08.641275 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.644817 master-0 kubenswrapper[32968]: I0309 16:58:08.644778 32968 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 09 16:58:08.660001 master-0 kubenswrapper[32968]: I0309 16:58:08.659896 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-8xgtw"] Mar 09 16:58:08.696573 master-0 kubenswrapper[32968]: I0309 16:58:08.696481 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-registration-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.696573 master-0 kubenswrapper[32968]: I0309 16:58:08.696582 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/63397faa-c4d7-47d5-bde0-2914d46b219b-metrics-cert\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696645 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-device-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696677 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-sys\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " 
pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696772 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-run-udev\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696835 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6btxd\" (UniqueName: \"kubernetes.io/projected/63397faa-c4d7-47d5-bde0-2914d46b219b-kube-api-access-6btxd\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696863 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-node-plugin-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696881 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-file-lock-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696915 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-pod-volumes-dir\") pod \"vg-manager-8xgtw\" (UID: 
\"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696937 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-csi-plugin-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.697015 master-0 kubenswrapper[32968]: I0309 16:58:08.696960 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-lvmd-config\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799415 master-0 kubenswrapper[32968]: I0309 16:58:08.799322 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6btxd\" (UniqueName: \"kubernetes.io/projected/63397faa-c4d7-47d5-bde0-2914d46b219b-kube-api-access-6btxd\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799415 master-0 kubenswrapper[32968]: I0309 16:58:08.799417 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-node-plugin-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799468 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-file-lock-dir\") pod \"vg-manager-8xgtw\" (UID: 
\"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799499 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-csi-plugin-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799528 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-pod-volumes-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799550 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-lvmd-config\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799607 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-registration-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799636 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/63397faa-c4d7-47d5-bde0-2914d46b219b-metrics-cert\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " 
pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799677 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-device-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799722 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-sys\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.799845 master-0 kubenswrapper[32968]: I0309 16:58:08.799753 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-run-udev\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.800204 master-0 kubenswrapper[32968]: I0309 16:58:08.799866 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-run-udev\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.800204 master-0 kubenswrapper[32968]: I0309 16:58:08.799980 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-pod-volumes-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.800393 master-0 kubenswrapper[32968]: I0309 16:58:08.800354 32968 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-lvmd-config\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.800606 master-0 kubenswrapper[32968]: I0309 16:58:08.800573 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-csi-plugin-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.800751 master-0 kubenswrapper[32968]: I0309 16:58:08.800586 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-device-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.800970 master-0 kubenswrapper[32968]: I0309 16:58:08.800934 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-node-plugin-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.801188 master-0 kubenswrapper[32968]: I0309 16:58:08.801116 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-file-lock-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.801286 master-0 kubenswrapper[32968]: I0309 16:58:08.801257 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-sys\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.801760 master-0 kubenswrapper[32968]: I0309 16:58:08.801725 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63397faa-c4d7-47d5-bde0-2914d46b219b-registration-dir\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.806520 master-0 kubenswrapper[32968]: I0309 16:58:08.806491 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/63397faa-c4d7-47d5-bde0-2914d46b219b-metrics-cert\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.833061 master-0 kubenswrapper[32968]: I0309 16:58:08.832999 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6btxd\" (UniqueName: \"kubernetes.io/projected/63397faa-c4d7-47d5-bde0-2914d46b219b-kube-api-access-6btxd\") pod \"vg-manager-8xgtw\" (UID: \"63397faa-c4d7-47d5-bde0-2914d46b219b\") " pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:08.961440 master-0 kubenswrapper[32968]: I0309 16:58:08.961347 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:09.439345 master-0 kubenswrapper[32968]: I0309 16:58:09.439238 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-8xgtw"] Mar 09 16:58:09.443847 master-0 kubenswrapper[32968]: W0309 16:58:09.443688 32968 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63397faa_c4d7_47d5_bde0_2914d46b219b.slice/crio-20f9176c920ce8422b86f14367e389c5990e8ca4eec502e099cd7f5f190b99b8 WatchSource:0}: Error finding container 20f9176c920ce8422b86f14367e389c5990e8ca4eec502e099cd7f5f190b99b8: Status 404 returned error can't find the container with id 20f9176c920ce8422b86f14367e389c5990e8ca4eec502e099cd7f5f190b99b8 Mar 09 16:58:09.620203 master-0 kubenswrapper[32968]: I0309 16:58:09.620112 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8xgtw" event={"ID":"63397faa-c4d7-47d5-bde0-2914d46b219b","Type":"ContainerStarted","Data":"9ff29763f7caf5f2f809503a83a084c39f49466b721243dbd7c977e45c5c6f90"} Mar 09 16:58:09.620203 master-0 kubenswrapper[32968]: I0309 16:58:09.620187 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8xgtw" event={"ID":"63397faa-c4d7-47d5-bde0-2914d46b219b","Type":"ContainerStarted","Data":"20f9176c920ce8422b86f14367e389c5990e8ca4eec502e099cd7f5f190b99b8"} Mar 09 16:58:09.653713 master-0 kubenswrapper[32968]: I0309 16:58:09.653579 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-8xgtw" podStartSLOduration=1.6535432490000002 podStartE2EDuration="1.653543249s" podCreationTimestamp="2026-03-09 16:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-09 16:58:09.647976316 +0000 UTC m=+715.751298866" watchObservedRunningTime="2026-03-09 16:58:09.653543249 
+0000 UTC m=+715.756865799" Mar 09 16:58:10.412512 master-0 kubenswrapper[32968]: I0309 16:58:10.412191 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-l6qtv" Mar 09 16:58:11.038254 master-0 kubenswrapper[32968]: I0309 16:58:11.037110 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-plwjh" Mar 09 16:58:11.649019 master-0 kubenswrapper[32968]: I0309 16:58:11.648957 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-8xgtw_63397faa-c4d7-47d5-bde0-2914d46b219b/vg-manager/0.log" Mar 09 16:58:11.649378 master-0 kubenswrapper[32968]: I0309 16:58:11.649044 32968 generic.go:334] "Generic (PLEG): container finished" podID="63397faa-c4d7-47d5-bde0-2914d46b219b" containerID="9ff29763f7caf5f2f809503a83a084c39f49466b721243dbd7c977e45c5c6f90" exitCode=1 Mar 09 16:58:11.649378 master-0 kubenswrapper[32968]: I0309 16:58:11.649096 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8xgtw" event={"ID":"63397faa-c4d7-47d5-bde0-2914d46b219b","Type":"ContainerDied","Data":"9ff29763f7caf5f2f809503a83a084c39f49466b721243dbd7c977e45c5c6f90"} Mar 09 16:58:11.650603 master-0 kubenswrapper[32968]: I0309 16:58:11.650116 32968 scope.go:117] "RemoveContainer" containerID="9ff29763f7caf5f2f809503a83a084c39f49466b721243dbd7c977e45c5c6f90" Mar 09 16:58:12.006843 master-0 kubenswrapper[32968]: I0309 16:58:12.006653 32968 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 09 16:58:12.516858 master-0 kubenswrapper[32968]: I0309 16:58:12.516693 32968 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-09T16:58:12.006709219Z","Handler":null,"Name":""} Mar 09 16:58:12.524387 master-0 
kubenswrapper[32968]: I0309 16:58:12.524322 32968 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 09 16:58:12.524387 master-0 kubenswrapper[32968]: I0309 16:58:12.524388 32968 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 09 16:58:12.666488 master-0 kubenswrapper[32968]: I0309 16:58:12.666391 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-8xgtw_63397faa-c4d7-47d5-bde0-2914d46b219b/vg-manager/0.log" Mar 09 16:58:12.666963 master-0 kubenswrapper[32968]: I0309 16:58:12.666541 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8xgtw" event={"ID":"63397faa-c4d7-47d5-bde0-2914d46b219b","Type":"ContainerStarted","Data":"8602f979e6718dd130d94c247ecc06ae951bb83f67f938240c6cf7b636a80ae3"} Mar 09 16:58:15.542460 master-0 kubenswrapper[32968]: I0309 16:58:15.541718 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v289d"] Mar 09 16:58:15.545139 master-0 kubenswrapper[32968]: I0309 16:58:15.545088 32968 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v289d" Mar 09 16:58:15.549319 master-0 kubenswrapper[32968]: I0309 16:58:15.549254 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 09 16:58:15.551510 master-0 kubenswrapper[32968]: I0309 16:58:15.549678 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 09 16:58:15.565772 master-0 kubenswrapper[32968]: I0309 16:58:15.565258 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v289d"] Mar 09 16:58:15.692574 master-0 kubenswrapper[32968]: I0309 16:58:15.685516 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxs9q\" (UniqueName: \"kubernetes.io/projected/2d147c26-354a-4c3b-b3d5-f88c4306bd42-kube-api-access-sxs9q\") pod \"openstack-operator-index-v289d\" (UID: \"2d147c26-354a-4c3b-b3d5-f88c4306bd42\") " pod="openstack-operators/openstack-operator-index-v289d" Mar 09 16:58:15.788034 master-0 kubenswrapper[32968]: I0309 16:58:15.787906 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxs9q\" (UniqueName: \"kubernetes.io/projected/2d147c26-354a-4c3b-b3d5-f88c4306bd42-kube-api-access-sxs9q\") pod \"openstack-operator-index-v289d\" (UID: \"2d147c26-354a-4c3b-b3d5-f88c4306bd42\") " pod="openstack-operators/openstack-operator-index-v289d" Mar 09 16:58:15.813637 master-0 kubenswrapper[32968]: I0309 16:58:15.809891 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxs9q\" (UniqueName: \"kubernetes.io/projected/2d147c26-354a-4c3b-b3d5-f88c4306bd42-kube-api-access-sxs9q\") pod \"openstack-operator-index-v289d\" (UID: \"2d147c26-354a-4c3b-b3d5-f88c4306bd42\") " pod="openstack-operators/openstack-operator-index-v289d" Mar 09 16:58:15.908390 master-0 
kubenswrapper[32968]: I0309 16:58:15.908283 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v289d" Mar 09 16:58:16.377962 master-0 kubenswrapper[32968]: I0309 16:58:16.377904 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v289d"] Mar 09 16:58:16.722491 master-0 kubenswrapper[32968]: I0309 16:58:16.722397 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v289d" event={"ID":"2d147c26-354a-4c3b-b3d5-f88c4306bd42","Type":"ContainerStarted","Data":"c3a342ff240184f3afb4c26a3c04ab3cb2c818178b63265409bf0f61bcf80d26"} Mar 09 16:58:18.738390 master-0 kubenswrapper[32968]: I0309 16:58:18.738322 32968 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-76b574c97f-rcllb" podUID="b29b950c-6b0f-4d86-a05f-9af9af5ebb82" containerName="console" containerID="cri-o://d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43" gracePeriod=15 Mar 09 16:58:18.963529 master-0 kubenswrapper[32968]: I0309 16:58:18.961775 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:18.965100 master-0 kubenswrapper[32968]: I0309 16:58:18.965037 32968 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:19.239770 master-0 kubenswrapper[32968]: I0309 16:58:19.239680 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76b574c97f-rcllb_b29b950c-6b0f-4d86-a05f-9af9af5ebb82/console/0.log" Mar 09 16:58:19.240016 master-0 kubenswrapper[32968]: I0309 16:58:19.239842 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76b574c97f-rcllb" Mar 09 16:58:19.370149 master-0 kubenswrapper[32968]: I0309 16:58:19.370054 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-oauth-serving-cert\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.370149 master-0 kubenswrapper[32968]: I0309 16:58:19.370131 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-config\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.370624 master-0 kubenswrapper[32968]: I0309 16:58:19.370586 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:58:19.370703 master-0 kubenswrapper[32968]: I0309 16:58:19.370650 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-config" (OuterVolumeSpecName: "console-config") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:58:19.370860 master-0 kubenswrapper[32968]: I0309 16:58:19.370823 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-serving-cert\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.371127 master-0 kubenswrapper[32968]: I0309 16:58:19.371105 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-oauth-config\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.371306 master-0 kubenswrapper[32968]: I0309 16:58:19.371286 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-trusted-ca-bundle\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.371450 master-0 kubenswrapper[32968]: I0309 16:58:19.371412 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn6m6\" (UniqueName: \"kubernetes.io/projected/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-kube-api-access-sn6m6\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.371634 master-0 kubenswrapper[32968]: I0309 16:58:19.371616 32968 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-service-ca\") pod \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\" (UID: \"b29b950c-6b0f-4d86-a05f-9af9af5ebb82\") " Mar 09 16:58:19.371896 master-0 kubenswrapper[32968]: I0309 
16:58:19.371857 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:58:19.372353 master-0 kubenswrapper[32968]: I0309 16:58:19.372320 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-service-ca" (OuterVolumeSpecName: "service-ca") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 09 16:58:19.372572 master-0 kubenswrapper[32968]: I0309 16:58:19.372552 32968 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 09 16:58:19.372661 master-0 kubenswrapper[32968]: I0309 16:58:19.372648 32968 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 09 16:58:19.372740 master-0 kubenswrapper[32968]: I0309 16:58:19.372729 32968 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:58:19.372804 master-0 kubenswrapper[32968]: I0309 16:58:19.372793 32968 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-config\") on node \"master-0\" DevicePath 
\"\"" Mar 09 16:58:19.375291 master-0 kubenswrapper[32968]: I0309 16:58:19.375191 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:58:19.376077 master-0 kubenswrapper[32968]: I0309 16:58:19.375985 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 09 16:58:19.377388 master-0 kubenswrapper[32968]: I0309 16:58:19.377321 32968 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-kube-api-access-sn6m6" (OuterVolumeSpecName: "kube-api-access-sn6m6") pod "b29b950c-6b0f-4d86-a05f-9af9af5ebb82" (UID: "b29b950c-6b0f-4d86-a05f-9af9af5ebb82"). InnerVolumeSpecName "kube-api-access-sn6m6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 09 16:58:19.474309 master-0 kubenswrapper[32968]: I0309 16:58:19.474229 32968 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn6m6\" (UniqueName: \"kubernetes.io/projected/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-kube-api-access-sn6m6\") on node \"master-0\" DevicePath \"\"" Mar 09 16:58:19.474309 master-0 kubenswrapper[32968]: I0309 16:58:19.474273 32968 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 09 16:58:19.474309 master-0 kubenswrapper[32968]: I0309 16:58:19.474283 32968 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b29b950c-6b0f-4d86-a05f-9af9af5ebb82-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 09 16:58:19.754680 master-0 kubenswrapper[32968]: I0309 16:58:19.754516 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-76b574c97f-rcllb_b29b950c-6b0f-4d86-a05f-9af9af5ebb82/console/0.log" Mar 09 16:58:19.754680 master-0 kubenswrapper[32968]: I0309 16:58:19.754599 32968 generic.go:334] "Generic (PLEG): container finished" podID="b29b950c-6b0f-4d86-a05f-9af9af5ebb82" containerID="d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43" exitCode=2 Mar 09 16:58:19.755407 master-0 kubenswrapper[32968]: I0309 16:58:19.755355 32968 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76b574c97f-rcllb" Mar 09 16:58:19.755593 master-0 kubenswrapper[32968]: I0309 16:58:19.755550 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b574c97f-rcllb" event={"ID":"b29b950c-6b0f-4d86-a05f-9af9af5ebb82","Type":"ContainerDied","Data":"d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43"} Mar 09 16:58:19.755750 master-0 kubenswrapper[32968]: I0309 16:58:19.755728 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:19.755854 master-0 kubenswrapper[32968]: I0309 16:58:19.755837 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b574c97f-rcllb" event={"ID":"b29b950c-6b0f-4d86-a05f-9af9af5ebb82","Type":"ContainerDied","Data":"083da966e25747ea49b0e8c7f67f902224aae31bfb116321cce8c69ad2333750"} Mar 09 16:58:19.755961 master-0 kubenswrapper[32968]: I0309 16:58:19.755944 32968 scope.go:117] "RemoveContainer" containerID="d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43" Mar 09 16:58:19.758879 master-0 kubenswrapper[32968]: I0309 16:58:19.758066 32968 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-8xgtw" Mar 09 16:58:19.779224 master-0 kubenswrapper[32968]: I0309 16:58:19.778869 32968 scope.go:117] "RemoveContainer" containerID="d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43" Mar 09 16:58:19.780476 master-0 kubenswrapper[32968]: E0309 16:58:19.779588 32968 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43\": container with ID starting with d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43 not found: ID does not exist" containerID="d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43" Mar 09 
16:58:19.780476 master-0 kubenswrapper[32968]: I0309 16:58:19.779637 32968 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43"} err="failed to get container status \"d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43\": rpc error: code = NotFound desc = could not find container \"d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43\": container with ID starting with d96b1f9c9bc6957e172edd3832735a922714046fc516ed7351529797ee8a6b43 not found: ID does not exist" Mar 09 16:58:19.854324 master-0 kubenswrapper[32968]: I0309 16:58:19.854253 32968 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-76b574c97f-rcllb"] Mar 09 16:58:19.866221 master-0 kubenswrapper[32968]: I0309 16:58:19.862595 32968 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-76b574c97f-rcllb"] Mar 09 16:58:20.105908 master-0 kubenswrapper[32968]: I0309 16:58:20.105739 32968 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29b950c-6b0f-4d86-a05f-9af9af5ebb82" path="/var/lib/kubelet/pods/b29b950c-6b0f-4d86-a05f-9af9af5ebb82/volumes" Mar 09 17:00:16.390748 master-0 kubenswrapper[32968]: E0309 17:00:16.390660 32968 log.go:32] "PullImage from image service failed" err="rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \"http://38.102.83.80:5001/v2/\": dial tcp 38.102.83.80:5001: i/o timeout" image="38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534" Mar 09 17:00:16.391693 master-0 kubenswrapper[32968]: E0309 17:00:16.391590 32968 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = DeadlineExceeded desc = initializing source 
docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \"http://38.102.83.80:5001/v2/\": dial tcp 38.102.83.80:5001: i/o timeout" image="38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534" Mar 09 17:00:16.392056 master-0 kubenswrapper[32968]: E0309 17:00:16.391981 32968 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxs9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-operator-index-v289d_openstack-operators(2d147c26-354a-4c3b-b3d5-f88c4306bd42): ErrImagePull: rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \"http://38.102.83.80:5001/v2/\": dial tcp 38.102.83.80:5001: i/o timeout" logger="UnhandledError" Mar 09 17:00:16.393556 master-0 kubenswrapper[32968]: E0309 17:00:16.393443 32968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \\\"http://38.102.83.80:5001/v2/\\\": dial tcp 38.102.83.80:5001: i/o 
timeout\"" pod="openstack-operators/openstack-operator-index-v289d" podUID="2d147c26-354a-4c3b-b3d5-f88c4306bd42"
Mar 09 17:00:16.948231 master-0 kubenswrapper[32968]: E0309 17:00:16.947788 32968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534\\\"\"" pod="openstack-operators/openstack-operator-index-v289d" podUID="2d147c26-354a-4c3b-b3d5-f88c4306bd42"
Mar 09 17:02:28.092845 master-0 kubenswrapper[32968]: E0309 17:02:28.092644 32968 log.go:32] "PullImage from image service failed" err="rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \"http://38.102.83.80:5001/v2/\": dial tcp 38.102.83.80:5001: i/o timeout" image="38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534"
Mar 09 17:02:28.092845 master-0 kubenswrapper[32968]: E0309 17:02:28.092757 32968 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \"http://38.102.83.80:5001/v2/\": dial tcp 38.102.83.80:5001: i/o timeout" image="38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534"
Mar 09 17:02:28.093882 master-0 kubenswrapper[32968]: E0309 17:02:28.092980 32968 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxs9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-operator-index-v289d_openstack-operators(2d147c26-354a-4c3b-b3d5-f88c4306bd42): ErrImagePull: rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \"http://38.102.83.80:5001/v2/\": dial tcp 38.102.83.80:5001: i/o timeout" logger="UnhandledError"
Mar 09 17:02:28.094993 master-0 kubenswrapper[32968]: E0309 17:02:28.094938 32968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = DeadlineExceeded desc = initializing source docker://38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534: pinging container registry 38.102.83.80:5001: Get \\\"http://38.102.83.80:5001/v2/\\\": dial tcp 38.102.83.80:5001: i/o timeout\"" pod="openstack-operators/openstack-operator-index-v289d" podUID="2d147c26-354a-4c3b-b3d5-f88c4306bd42"
Mar 09 17:02:39.094625 master-0 kubenswrapper[32968]: E0309 17:02:39.092124 32968 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/openstack-k8s-operators/openstack-operator-index:581e9c0db82d502a54d54052822ffac7bccc0534\\\"\"" pod="openstack-operators/openstack-operator-index-v289d" podUID="2d147c26-354a-4c3b-b3d5-f88c4306bd42"
Mar 09 17:02:54.094337 master-0 kubenswrapper[32968]: I0309 17:02:54.093892 32968 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 09 17:03:27.695288 master-0 kubenswrapper[32968]: I0309 17:03:27.695074 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kcjcm/must-gather-vqj8n"]
Mar 09 17:03:27.696176 master-0 kubenswrapper[32968]: E0309 17:03:27.695807 32968 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29b950c-6b0f-4d86-a05f-9af9af5ebb82" containerName="console"
Mar 09 17:03:27.696176 master-0 kubenswrapper[32968]: I0309 17:03:27.695871 32968 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29b950c-6b0f-4d86-a05f-9af9af5ebb82" containerName="console"
Mar 09 17:03:27.696252 master-0 kubenswrapper[32968]: I0309 17:03:27.696195 32968 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29b950c-6b0f-4d86-a05f-9af9af5ebb82" containerName="console"
Mar 09 17:03:27.697752 master-0 kubenswrapper[32968]: I0309 17:03:27.697712 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.700609 master-0 kubenswrapper[32968]: I0309 17:03:27.700537 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kcjcm"/"kube-root-ca.crt"
Mar 09 17:03:27.700902 master-0 kubenswrapper[32968]: I0309 17:03:27.700874 32968 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kcjcm"/"openshift-service-ca.crt"
Mar 09 17:03:27.716688 master-0 kubenswrapper[32968]: I0309 17:03:27.716629 32968 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kcjcm/must-gather-pr8mv"]
Mar 09 17:03:27.718663 master-0 kubenswrapper[32968]: I0309 17:03:27.718624 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:27.735637 master-0 kubenswrapper[32968]: I0309 17:03:27.735566 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kcjcm/must-gather-vqj8n"]
Mar 09 17:03:27.751472 master-0 kubenswrapper[32968]: I0309 17:03:27.751356 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kcjcm/must-gather-pr8mv"]
Mar 09 17:03:27.852963 master-0 kubenswrapper[32968]: I0309 17:03:27.852881 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91d103b7-4e89-4b52-8bfc-20381698c84f-must-gather-output\") pod \"must-gather-vqj8n\" (UID: \"91d103b7-4e89-4b52-8bfc-20381698c84f\") " pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.853213 master-0 kubenswrapper[32968]: I0309 17:03:27.852993 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct76k\" (UniqueName: \"kubernetes.io/projected/91d103b7-4e89-4b52-8bfc-20381698c84f-kube-api-access-ct76k\") pod \"must-gather-vqj8n\" (UID: \"91d103b7-4e89-4b52-8bfc-20381698c84f\") " pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.853213 master-0 kubenswrapper[32968]: I0309 17:03:27.853065 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fr2\" (UniqueName: \"kubernetes.io/projected/cf6b60e1-c5b5-4fb1-a596-b5012b161c6d-kube-api-access-k6fr2\") pod \"must-gather-pr8mv\" (UID: \"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d\") " pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:27.853213 master-0 kubenswrapper[32968]: I0309 17:03:27.853110 32968 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf6b60e1-c5b5-4fb1-a596-b5012b161c6d-must-gather-output\") pod \"must-gather-pr8mv\" (UID: \"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d\") " pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:27.960450 master-0 kubenswrapper[32968]: I0309 17:03:27.960225 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91d103b7-4e89-4b52-8bfc-20381698c84f-must-gather-output\") pod \"must-gather-vqj8n\" (UID: \"91d103b7-4e89-4b52-8bfc-20381698c84f\") " pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.960450 master-0 kubenswrapper[32968]: I0309 17:03:27.960377 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct76k\" (UniqueName: \"kubernetes.io/projected/91d103b7-4e89-4b52-8bfc-20381698c84f-kube-api-access-ct76k\") pod \"must-gather-vqj8n\" (UID: \"91d103b7-4e89-4b52-8bfc-20381698c84f\") " pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.960857 master-0 kubenswrapper[32968]: I0309 17:03:27.960706 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fr2\" (UniqueName: \"kubernetes.io/projected/cf6b60e1-c5b5-4fb1-a596-b5012b161c6d-kube-api-access-k6fr2\") pod \"must-gather-pr8mv\" (UID: \"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d\") " pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:27.960857 master-0 kubenswrapper[32968]: I0309 17:03:27.960848 32968 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf6b60e1-c5b5-4fb1-a596-b5012b161c6d-must-gather-output\") pod \"must-gather-pr8mv\" (UID: \"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d\") " pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:27.961441 master-0 kubenswrapper[32968]: I0309 17:03:27.961383 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91d103b7-4e89-4b52-8bfc-20381698c84f-must-gather-output\") pod \"must-gather-vqj8n\" (UID: \"91d103b7-4e89-4b52-8bfc-20381698c84f\") " pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.961597 master-0 kubenswrapper[32968]: I0309 17:03:27.961572 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf6b60e1-c5b5-4fb1-a596-b5012b161c6d-must-gather-output\") pod \"must-gather-pr8mv\" (UID: \"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d\") " pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:27.984560 master-0 kubenswrapper[32968]: I0309 17:03:27.983616 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct76k\" (UniqueName: \"kubernetes.io/projected/91d103b7-4e89-4b52-8bfc-20381698c84f-kube-api-access-ct76k\") pod \"must-gather-vqj8n\" (UID: \"91d103b7-4e89-4b52-8bfc-20381698c84f\") " pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:27.986968 master-0 kubenswrapper[32968]: I0309 17:03:27.986910 32968 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fr2\" (UniqueName: \"kubernetes.io/projected/cf6b60e1-c5b5-4fb1-a596-b5012b161c6d-kube-api-access-k6fr2\") pod \"must-gather-pr8mv\" (UID: \"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d\") " pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:28.020988 master-0 kubenswrapper[32968]: I0309 17:03:28.020893 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcjcm/must-gather-vqj8n"
Mar 09 17:03:28.042652 master-0 kubenswrapper[32968]: I0309 17:03:28.039959 32968 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcjcm/must-gather-pr8mv"
Mar 09 17:03:28.529991 master-0 kubenswrapper[32968]: I0309 17:03:28.529889 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kcjcm/must-gather-vqj8n"]
Mar 09 17:03:28.672728 master-0 kubenswrapper[32968]: I0309 17:03:28.672654 32968 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kcjcm/must-gather-pr8mv"]
Mar 09 17:03:28.952738 master-0 kubenswrapper[32968]: I0309 17:03:28.952630 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcjcm/must-gather-pr8mv" event={"ID":"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d","Type":"ContainerStarted","Data":"555d8abfe35548d87406d5737485791ce2ca3d6ebf14fc8dc477cd1c02c6b50f"}
Mar 09 17:03:28.954745 master-0 kubenswrapper[32968]: I0309 17:03:28.954697 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcjcm/must-gather-vqj8n" event={"ID":"91d103b7-4e89-4b52-8bfc-20381698c84f","Type":"ContainerStarted","Data":"ea1710b1ece77032d5204450ce5d7bae040eb73093bb2ad04572fa263c78f7c1"}
Mar 09 17:03:30.986684 master-0 kubenswrapper[32968]: I0309 17:03:30.986557 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcjcm/must-gather-pr8mv" event={"ID":"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d","Type":"ContainerStarted","Data":"d8914cd9fe612abddf53d09d50cc129f2b9768be6aa81d2d074589660d5c4478"}
Mar 09 17:03:33.021705 master-0 kubenswrapper[32968]: I0309 17:03:33.021279 32968 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcjcm/must-gather-pr8mv" event={"ID":"cf6b60e1-c5b5-4fb1-a596-b5012b161c6d","Type":"ContainerStarted","Data":"d4bdb6c60ead66649cd731a7476cab9951519e17d2570f937d42cb54ceff30e3"}
Mar 09 17:03:33.105111 master-0 kubenswrapper[32968]: I0309 17:03:33.103746 32968 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kcjcm/must-gather-pr8mv" podStartSLOduration=4.721299245 podStartE2EDuration="6.103722704s" podCreationTimestamp="2026-03-09 17:03:27 +0000 UTC" firstStartedPulling="2026-03-09 17:03:28.688141244 +0000 UTC m=+1034.791463784" lastFinishedPulling="2026-03-09 17:03:30.070564713 +0000 UTC m=+1036.173887243" observedRunningTime="2026-03-09 17:03:33.100038993 +0000 UTC m=+1039.203361543" watchObservedRunningTime="2026-03-09 17:03:33.103722704 +0000 UTC m=+1039.207045254"
Mar 09 17:03:35.458358 master-0 kubenswrapper[32968]: I0309 17:03:35.458285 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-sc9tf_eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/cluster-version-operator/0.log"
Mar 09 17:03:35.891239 master-0 kubenswrapper[32968]: I0309 17:03:35.891085 32968 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-sc9tf_eaf7dea5-9848-41f0-bf0b-ec70ec0380f1/cluster-version-operator/1.log"